AMD or Intel as well as the memory brand - page 33

 

A little hike in the prices of...


CPU:
Core i5-750 OEM <2.66GHz, 4.8 GT/s, 8MB, LGA1156 (Lynnfield)>
7493.00р.
Core i7-860 OEM <2.80GHz, 4.8 GT/s, 8MB, LGA1156 (Lynnfield)>
10825.00р.
Motherboard:
MSI P55-GD65 <S-1156, iP55, DDR3, 2*PCI-E 16x, SATA, GLan, ATX, Retail>
5964.00р.
RAM, 4 sticks can be used:
DDR3 2048Mb (pc-10600) 1333MHz Kingston, Kit of 2 <Retail> (KVR1333D3N9K2/2G)
1609.00р.
DDR3 2048 Mb (pc-10660) 1333MHz Hynix original
1559.00р.
DDR3 4096Mb (pc-10600) 1333MHz Kingston, Kit of 2 (KVR1333D3N9K2/4G)
3048.00р.
=========================================

8GB 4x1604=6416р.
i5 7493р.
motherboard 5964р.
-------------------
19873р.

16GB 4x3048=12192р.
i5 7493р.
motherboard 5964р.
-------------------
25649р.

8GB 4x1604=6416р.
i7 10825р.
motherboard 5964р.
-------------------
23205р.

16GB 4x3048=12192р.
i7 10825р.
motherboard 5964р.
-------------------
28981р.
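As a quick sanity check of the four totals above, here is a small Python sketch. Prices are in rubles as quoted in this thread; note the 1604 р. per-kit RAM price comes from the totals' own arithmetic and differs slightly from the 1609 р. list price.

```python
# Sanity check of the four configuration totals quoted above.
# Prices in rubles; the 1604 per-kit RAM price is taken from the
# post's own arithmetic (the list price above says 1609).
RAM_2GB_KIT = 1604   # 2x1GB DDR3 kit
RAM_4GB_KIT = 3048   # 2x2GB DDR3 kit
I5_750 = 7493
I7_860 = 10825
MOTHERBOARD = 5964

configs = {
    "8 GB + i5": 4 * RAM_2GB_KIT + I5_750 + MOTHERBOARD,
    "16 GB + i5": 4 * RAM_4GB_KIT + I5_750 + MOTHERBOARD,
    "8 GB + i7": 4 * RAM_2GB_KIT + I7_860 + MOTHERBOARD,
    "16 GB + i7": 4 * RAM_4GB_KIT + I7_860 + MOTHERBOARD,
}
for name, total in configs.items():
    print(f"{name}: {total} р.")
```

All four computed totals match the figures in the post (19873, 25649, 23205 and 28981 р.).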



The prices are chosen from the lowest price range, and sometimes the only choice...

 
kombat >> :
...

http://www.overclockers.ru/hardnews/34321.shtml

 
BLACK_BOX >> :

Therefore, products from these manufacturers should not differ ON AVERAGE; they differ only in their position on the price shelf, in proportion to the amount of money spent on advertising.

The AVERAGE is NOT what interests us. The fact that they have PR managers of the same level - fine, granted. So what?

 

I'd like to add my two cents as well :)

To check processor stability under maximum load, LinX and Prime95 should be used. Both are available in 32-bit and 64-bit versions. They heat the processor up (in the most literal sense) better than most other applications, even ones that show 100% load.

For memory testing - testmem86 (or testmem86+).

Measuring the speed of the strategy tester with a script is pure synthetics, not very applicable in practice. Although there is some correlation, of course.

By the way, the second test (a primitive Expert Advisor on 1 day of history) is not representative either. Do you often optimize on a single day of history? A year of history is far larger and will almost certainly not fit into the processor's cache. The EURUSD history obtained from Alpari for 04.01.1999 through 20.09.2009 takes 150 MB, i.e. approximately 14 MB per year. Besides, during testing with tick modelling the memory consumed by the terminal grows significantly (several times over), which should also be taken into account.
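The ~14 MB/year figure is easy to verify from the numbers in that paragraph; a minimal sketch (the dates and the 150 MB size are taken from the post):

```python
from datetime import date

# 150 MB of EURUSD minute history spanning 1999-01-04 .. 2009-09-20
span_years = (date(2009, 9, 20) - date(1999, 1, 4)).days / 365.25
mb_per_year = 150 / span_years
print(f"{span_years:.1f} years -> {mb_per_year:.1f} MB/year")
# -> 10.7 years -> 14.0 MB/year
```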

That is why I suggest the following: take a longer period (e.g. a year) and measure not the optimization time but the time of a single test run of a reasonably complex Expert Advisor. The Expert Advisor itself can be posted here too. That way we can study the performance of various processors in conditions closer to "combat". In fact, the only difference between testing and optimization is that during testing the terminal displays not only the number of trades and the EA's total result, but each individual order as well. But (presumably) that overhead is below 0.1%, since the corresponding API calls take microseconds. And here the impact of the speed and architecture of the caches, buses and memory, as well as pipeline flushes on branches (and the branch prediction algorithms), will show up much more clearly.

There is another research option: use Intel VTune to look for bottlenecks in EA execution in the terminal.

 
Docent >> :

I'd like to add my two cents as well :)

To check processor stability under maximum load, LinX and Prime95 should be used. Both are available in 32-bit and 64-bit versions. They heat the processor up (in the most literal sense) better than most other applications, even ones that show 100% load.

For memory testing - testmem86 (or testmem86+).

Measuring the speed of the strategy tester with a script is pure synthetics, not very applicable in practice. Although there is some correlation, of course.

By the way, the second test (a primitive Expert Advisor on 1 day of history) is not representative either. Do you often optimize on a single day of history? A year of history is far larger and will almost certainly not fit into the processor's cache. The EURUSD history obtained from Alpari for 04.01.1999 through 20.09.2009 takes 150 MB, i.e. approximately 14 MB per year. Besides, during testing with tick modelling the memory consumed by the terminal grows significantly (several times over), which should also be taken into account.

That is why I suggest the following: take a longer period (e.g. a year) and measure not the optimization time but the time of a single test run of a reasonably complex Expert Advisor. The Expert Advisor itself can be posted here too. That way we can study the performance of various processors in conditions closer to "combat". In fact, the only difference between testing and optimization is that during testing the terminal displays not only the number of trades and the EA's total result, but each individual order as well. But (presumably) that overhead is below 0.1%, since the corresponding API calls take microseconds. And here the impact of the speed and architecture of the caches, buses and memory, as well as pipeline flushes on branches (and the branch prediction algorithms), will show up much more clearly.

There is another research option. You can use Intel VTune to search for "bottlenecks" when executing the Expert Advisor in the terminal.

You are not paying attention - the optimization runs over 1 year on minute bars.

Thank you for the test programs. If you give us the links - thank you again.

On the subject of bottlenecks: with large volumes, the main bottleneck is memory. On 32-bit systems it's 3 GB and that's it. Then you have to go to disk, and that is very slow - speed drops by dozens of times. An assumption (no such machine at hand): 64 bits and plenty of RAM will save the father of Russian democracy.

 

You are not paying attention - the optimization runs over 1 year on minute bars.

My bad, I withdraw my offer.

References: LinX (mirror), Prime95 32-bit, Prime95 64-bit. And one more correction: not testmem86(+), but memtest86 and, accordingly, memtest86+.

Concerning bottlenecks: with large volumes the main bottleneck is memory. On 32-bit systems it's 3 GB and that's it. We have to go to disk, and that is very slow - speed drops by dozens of times. An assumption (no such machine at hand): 64 bits and plenty of RAM will save the father of Russian democracy.

The tester consumes about 700 MB of memory during a 10-year history run, so the architectural limitations of 32-bit addressing don't play a role here.

I was referring to the "microarchitectural" nature of the bottlenecks - cache misses and unnecessary prefetches, bus load, branch mispredictions, etc.

 

Thank you.

About the bottlenecks. I didn't dig into it deeply, I just watched the disk activity. With small tasks there are no disk accesses during optimization - everything is fast. Past a certain threshold the disk starts being used heavily and... everything stalls. For a long time. That's why I'm counting on 64 bits. And no, I won't buy an SSD just for this!!! RAM will be faster anyway.

 
BLACK_BOX >> :

The frequency of ONE core is on the order of 1.1GHz. Although the datasheet at the time said (I repeat, I'm no expert):

1.1GHz * 2 cores = 2.2GHz.(CPU-Z screenshots)

The arithmetic here, of course, is a bit odd, but never mind. Then why is it marked 4200+ and not 2200+?

And frequencies on the order of 1 GHz were surpassed not just 3 but 8 years ago.

A statistical outlier - which is surely what your configuration's test result is - should be treated with great skepticism, without trying to explain it by the incredible computational efficiency of a chip at least 3 years old that supposedly outruns all the latest developments (including the Xeon W5590). That simply cannot be the case.

It is much more reasonable to look for inconsistencies in test conditions or posted pictures.

 

Uh-huh. In inconsistencies. The tester, for example, tries to download history if it doesn't have enough. But if the download fails - no big deal: the optimization still runs, just at a sensational speed, because it covers not a year but a shorter period. "This is how unhealthy sensations are born" - Vybegallo.

That's why I explicitly said in the test description: download the minute history first. But who reads instructions? )))

 

The arithmetic here, of course, is a bit odd, but never mind. Then why is this chip marked 4200+ and not 2200+?

And frequencies on the order of 1 GHz were surpassed not just 3 but 8 years ago.

It's really quite simple. All modern processors have idle power-saving technologies. This particular processor simply drops its frequency at idle from 2.2 to 1.0 GHz. On AMD processors this technology is called Cool'n'Quiet (C'n'Q). Moreover, the power saving is also indicated by the supply voltage of 1.1V, which is plainly insufficient for a 90nm desktop processor at its nominal frequency.

By the way, Mathemat, your CPU-Z screenshot shows the same thing: at idle the processor has dropped its clock from the nominal 2.53GHz to 1.6GHz and its supply voltage to 1.16V. This is power saving too, only in Intel's version: Enhanced SpeedStep.

And Mathemat, since you also tabulated the cache size, you should correct a few entries.

In particular, where CPU-Z shows the L2 cache size as 2x XXX KB (for the 3800+, 4200+ and Q8200), enter XXX into the table, not 2*XXX, since the halves belong to different cores (or pairs of cores) and cannot be used by a single tester thread simultaneously. Correspondingly, the W5590's effective L2 capacity is 256 KB. And at the end of the table, for Dmido's processor, replace 512MB with 512KB.
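The per-thread point can be made concrete with a small sketch. The cache sizes below are what I believe CPU-Z reports for these models and should be treated as illustrative:

```python
# When CPU-Z reports L2 as "2 x N KB", the halves belong to
# different cores (or core pairs), so a single-threaded tester
# run can use only one of them.
reported_l2 = {  # (blocks, KB per block), illustrative values
    "Athlon 64 X2 3800+": (2, 512),
    "Athlon 64 X2 4200+": (2, 512),
    "Core 2 Quad Q8200": (2, 2048),
    "Xeon W5590": (4, 256),  # Nehalem: 256 KB L2 per core
}
for cpu, (blocks, kb) in reported_l2.items():
    print(f"{cpu}: reported {blocks}x{kb} KB, "
          f"usable by one thread: {kb} KB")
```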