Global recession due to the end of Moore's Law - page 5

 
СанСаныч Фоменко:


Moore is just advertising his brainchild and, it seems, doesn't understand anything about the progress of science at all.

Moore doesn't care, he's 87 years old. Besides, he didn't invent the law, he just noticed that the number of transistors doubles every two years. The law is upheld by competing microprocessor manufacturers in their effort to sell new chips with increased performance every year. Incidentally, in 2005 the pursuit of higher processor frequency gave way to the pursuit of more cores. Today, with transistor size shrinking from generation to generation, no one aims to make the individual transistor faster and more powerful, because the growing concentration of transistors on a chip leads to heat dissipation problems. So the trick is to reduce the power consumption of each transistor so that the total power stays the same as the number of transistors grows. Thus, when the number of transistors doubles every 2 years according to Moore's law, everyone expects power consumption per transistor to drop by 50%. All chip manufacturers try to hit this target to keep up with the competition. But in 2020 silicon technology will reach its end as transistors approach atomic size, and no new technology is available, not even on the horizon.
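To make the arithmetic behind that expectation explicit, here is a minimal sketch in Python (the numbers are purely illustrative, not tied to any real process node):

# Constant-total-power argument: transistor count doubles each generation,
# so per-transistor power must halve for the chip's power budget to hold.
transistors = 1_000_000          # relative count in generation 0
power_per_transistor = 1.0       # relative power per transistor in generation 0

for generation in range(1, 6):   # five 2-year generations
    transistors *= 2             # Moore's law: count doubles
    power_per_transistor *= 0.5  # the expected 50% drop per transistor
    total_power = transistors * power_per_transistor
    print(generation, transistors, total_power)

The total stays flat only as long as per-transistor power really halves every generation, which is exactly what stops working once transistors approach atomic size.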
 
Well, it's commonplace. There are many examples in the world, the internal combustion engine, for instance: you cannot keep raising the compression ratio to increase efficiency (the gains shrink the more you raise it) or keep making it rev higher. There they have stalled, and serious breakthroughs are unlikely. And earlier you could have derived some comrade's law according to which engine efficiency was growing rapidly.

Piston aviation died in a similar way - it hit its ceiling; such an engine has constant power. A jet engine, with its constant thrust, is another matter.

Well, it's the same with processors, I guess. I think the answer lies in new materials. The voltage drop across a germanium diode is lower than across a silicon one (I mean it in that vein, not that silicon should literally be replaced with germanium).

And anyway, what does today's "world economy" spend its increased hardware capabilities on? On releasing a new version of Windows? )) (Win10 eats more than XP.)
 
pavlick_:
Well, it's commonplace. There are many examples in the world, the internal combustion engine, for instance: you cannot keep raising the compression ratio to increase efficiency (the gains shrink the more you raise it) or keep making it rev higher. There they have stalled, and serious breakthroughs are unlikely. And earlier you could have derived some comrade's law according to which engine efficiency was growing rapidly.

Piston aviation died in a similar way - it hit its ceiling; such an engine has constant power. A jet engine, with its constant thrust, is another matter.

Well, it's the same with processors, I guess. I think the answer lies in new materials. The voltage drop across a germanium diode is lower than across a silicon one (I mean it in that vein, not that silicon should literally be replaced with germanium).

And anyway, what does today's "world economy" spend its increased hardware capabilities on? On releasing a new version of Windows? )) (Win10 eats more than XP.)
From my own observation, visually it's less.
 
Vladimir:
... and no new technology is available, not even on the horizon.

What is this new technology for?

Is it for PR, to prod impressionable people into replacing their computers?

I suggested above comparing the BESM-4 with modern computers in terms of the number of floating-point operations per second. No one has done it. Yet floating point is a better measure of progress on computational tasks.
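One half of such a comparison is easy to obtain: just time a floating-point kernel on a modern machine. A minimal sketch (a pure-Python loop, so far below the hardware peak; the BESM-4 side would still have to come from its documentation):

import time

# Crude FLOP/s estimate for the machine this runs on.
# A serious comparison would use a tuned benchmark such as LINPACK.
n = 10_000_000
x, y, s = 1.0000001, 0.9999999, 0.0

start = time.perf_counter()
for _ in range(n):
    s += x * y            # one multiply and one add per iteration
elapsed = time.perf_counter() - start

flops = 2 * n / elapsed   # 2 floating-point operations per iteration
print(f"~{flops:,.0f} FLOP/s (pure-Python loop)")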

From my perspective as a user with very limited knowledge of programming techniques, I don't need gigahertz at all. I need to be sure that, without making special individual efforts, I will get code that is as efficient as possible at the given stage of development. And gigahertz is clearly not in first place here.

I'll put the following circumstance in first place. If my application involves some computationally complex algorithm, e.g. optimization or operations with matrices, then I should AUTOMATICALLY get the most efficient code possible, both in terms of writing technique and hardware usage, e.g. multithreading. And while over the last 15 years clock frequencies have grown by at most 100%, my automatic approach, e.g. using the MKL library for matrices, gives an increase in performance by orders of magnitude.
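To illustrate the kind of gap meant here, a minimal sketch comparing a hand-written triple loop with a BLAS-backed call (NumPy typically dispatches to MKL or OpenBLAS; the exact ratio depends on the machine):

import time
import numpy as np

# Naive triple-loop matrix multiply vs NumPy's @, which calls an
# optimized, multithreaded BLAS (often MKL) under the hood.
n = 200
a = np.random.rand(n, n)
b = np.random.rand(n, n)

def naive_matmul(a, b):
    n = a.shape[0]
    c = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            s = 0.0
            for k in range(n):
                s += a[i, k] * b[k, j]
            c[i, j] = s
    return c

t0 = time.perf_counter()
c1 = naive_matmul(a, b)          # takes seconds in pure Python
t1 = time.perf_counter()
c2 = a @ b                       # takes a fraction of a millisecond
t2 = time.perf_counter()

print(f"naive loop: {t1 - t0:.2f} s, BLAS: {t2 - t1:.5f} s")
print("results match:", np.allclose(c1, c2))

The same source code keeps getting faster over the years not because of gigahertz but because the library underneath is rewritten for new hardware.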

The moral of my words: PC performance can't be measured in gigahertz; it should be measured by the speed of execution of applied tasks.

 
Vladimir:

The first GPU with a significant number of cores appeared in 2007 (Nvidia, 128 cores). Today the number of cores has reached 3,000. Did you experience a 128x speedup of your computer in 2007 versus 2006? How about a 3000x speedup today compared to 2006? There isn't one. The cores continue to be used in graphics, where parallelization is easy. In 2009-2010 I tried to program something myself on a GPU with 256 cores. I immediately found out why most software doesn't use multiple cores - it is very difficult; the developer has to decide manually which parts of the program can be parallelized. Well, I did finish the new code, and it ran 3 times faster on 256 cores. Still, I was pleased even with the 3x speedup. But when I had to write the next piece of code, I recalled my agony and gave up on parallelizing. Of course, particular problems such as graphics and database handling will continue to use multiple cores, but other programs will only be helped by a new compiler that automatically finds the places in a program that can be parallelized, and no such compiler exists as far as I know.
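The 3x result on 256 cores is roughly what Amdahl's law predicts when only part of a program can run in parallel; a minimal sketch (the fractions are illustrative, back-calculated rather than taken from the post):

# Amdahl's law: speedup = 1 / ((1 - p) + p / n),
# where p is the parallelizable fraction of the runtime and n the core count.
def amdahl_speedup(p, n):
    return 1.0 / ((1.0 - p) + p / n)

# On 256 cores, an overall speedup of about 3x corresponds to roughly
# two thirds of the runtime being parallelizable.
for p in (0.5, 0.67, 0.9, 0.99):
    print(f"p = {p:.2f}: speedup on 256 cores = {amdahl_speedup(p, 256):.1f}x")

So even heroic manual parallelization buys little unless nearly the whole program can be made parallel.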

I don't deny the potential of improving software and making computers faster on that basis. I argue that purchases of new computers and smartphones will fall in 2021-2022 due to the stalling of hardware improvements. Will you buy a new computer if it has the same cores, memory and frequency as your old one? Probably not; you will buy new software instead. All hardware manufacturers and related industries will go into recession, with mass unemployment.

Can you recommend a CPU with 256 cores?
 
The moral of my words: PC performance can't be measured in gigahertz; it should be measured by the speed of execution of applied tasks.
Definitely, if only because there is such a thing as an instruction pipeline: https://ru.wikipedia.org/wiki/%D0%92%D1%8B%D1%87%D0%B8%D1%81%D0%BB%D0%B8%D1%82%D0%B5%D0%BB%D1%8C%D0%BD%D1%8B%D0%B9_%D0%BA%D0%BE%D0%BD%D0%B2%D0%B5%D0%B9%D0%B5%D1%80
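A toy illustration of that point: effective throughput is clock frequency times instructions retired per cycle, so a lower-clocked but better-pipelined CPU can come out ahead (hypothetical numbers):

# Effective throughput = clock frequency (GHz) * instructions per cycle (IPC).
# The figures below are made up purely for illustration.
cpus = {
    "CPU A (higher clock, shallow pipeline)": {"ghz": 4.0, "ipc": 1.0},
    "CPU B (lower clock, well-pipelined)":    {"ghz": 3.0, "ipc": 2.5},
}

for name, spec in cpus.items():
    billion_instructions_per_second = spec["ghz"] * spec["ipc"]
    print(f"{name}: {billion_instructions_per_second:.1f} billion instructions/s")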
 
Alexey Busygin:
I read somewhere that an American institute created a processor with 1000 cores, but with no interaction with memory.
Yes, there was such a thing; I also remember a note about a 1440-core graphene chip.
 
Vladimir:
The number of cores will also stop growing, since running several cores in parallel consumes more power than one core. Decreasing transistor size led not only to an increase in the number of cores but also to a decrease in per-transistor power, which is what allowed total power to be kept at roughly the same level.

Vladimir, let's remember the good old principle of the ES mainframes (ЕС ЭВМ).

Now we see the same thing (in principle): handheld terminals connected to a cloud service with essentially limitless computing capabilities.

So in the future there will be computing farms renting resources to users, and desktops and handhelds will only be terminals communicating with the mainframe.

Conclusion: Intel will be loaded to the hilt, since even the processors already available will be needed in ever greater numbers, and I don't foresee any stagnation in this industry (imho).

ЕС ЭВМ — Wikipedia (ru.wikipedia.org): the Soviet ES EVM series of computers, software- and hardware-compatible analogs of the IBM System/360 and System/370.
 
Nikolay Demko:

Vladimir, let's remember the good old principle of the ES mainframes (ЕС ЭВМ).

Now we see the same thing (in principle): handheld terminals connected to a cloud service with essentially limitless computing capabilities.

So in the future there will be computing farms renting resources to users, and desktops and handhelds will only be terminals communicating with the mainframe.

Conclusion: Intel will be loaded to the hilt, since even the processors already available will be needed in ever greater numbers, and I don't foresee any stagnation in this industry (imho).

About the farms you're largely right; they are becoming more and more popular.
 

And this, by the way, was the OS on the ES machines.

A virtual machine system

Система виртуальных машин (System of Virtual Machines) — Wikipedia (ru.wikipedia.org): СВМ, the Soviet counterpart of IBM VM (and its early version CP/CMS), the first system to implement virtual machine technology.