AI 2023. Meet ChatGPT.

 
Peter Konow #:

I'll add a thought in the context of the topic of reproduction:

  • Controlling the production process at the machine level is one thing. We already have this, though not yet in full.
  • Production management at the factory level is fundamentally different. I don't think this has been realised yet.
  • Production management across multiple enterprises is unattainable. Why? Because the amount of data to be received and analysed is too large and, at the same time, insufficient.
  • Controlling fully autonomous self-reproduction of robots is an unrealisable idea: there are too many parameters to optimise, and at the same time too few (in reality there are far more).

The more conditions, the more errors. The first can already work under human control; the rest are impossible, because AI errors will compound exponentially.

 

Right now I'm drinking dried-fruit compote, and it gives me optimism for tomorrow, even though it has apricots and dried cherries in it. Can the AI understand that?

P.S. Or that I want to booze till morning alone and look at the sky.

The AI had better not get close to me, or it will overheat its CPU).

 
Volodymyr Zubov #:

The more conditions, the more errors. The first can already work under human control; the rest are impossible, because AI errors will compound exponentially.

I agree. AI will only be able to provide very general, superficial control. The amount of data that has to be reacted to, weighed, categorised and ranked by importance is too large. It will have to prioritise, and as a result it will start missing many important events and problems. Its control will weaken and the efficiency of its management will fall. Everything will fall to the robots working on the ground, and they don't have much processing power.

In general, it's likely that self-reproduction will quickly become a serious problem.

I'd like to find a way to prove it mathematically...
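A very rough back-of-the-envelope sketch of the idea (the per-check error probability p and the independence of the checks are simplifying assumptions, purely for illustration):

  P(at least one error among n independent checks) = 1 − (1 − p)^n ≈ n·p for small p
  P(a chain of k stages stays error-free) = (1 − p)^k

So the chance of staying error-free decays exponentially as the number of conditions and stages grows, which is one way to read "errors will compound exponentially". Real dependencies between the conditions would change the numbers, but hardly the tendency.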

 
Peter Konow #:

I agree. AI will only be able to provide very general, superficial control. The amount of data that has to be reacted to, weighed, categorised and ranked by importance is too large. It will have to prioritise, and as a result it will start missing many important events and problems. Its control will weaken and the efficiency of its management will fall. Everything will fall to the robots working on the ground, and they don't have much processing power.

In general, it is likely that self-reproduction will quickly become a serious problem.

I'd like to find a way to prove it mathematically...

I studied VT (computer science), but we were taught from the ground up: from the physics of transistors to their manufacture, then from transistors to logic circuits, and from those to microprocessors. AI won't do that.

 
Volodymyr Zubov #:

I studied VT (computer science), but we were taught from the ground up: from the physics of transistors to their manufacture, then from transistors to logic circuits, and from those to microprocessors. AI won't do that.

Modern AI won't. But here we have a mythical, omnipotent AI that can do everything.)))))

 
Peter Konow #:

Modern AI won't. But here we have a mythical, omnipotent AI that can do everything.)))))

Exactly)))))

 
Volodymyr Zubov #:

Right now I'm drinking dried-fruit compote, and it gives me optimism for tomorrow, even though it has apricots and dried cherries in it. Can the AI understand that?

P.S. Or that I want to booze till morning alone and look at the sky.

The AI had better not get close to me, or it will overheat its CPU).

It won't understand, but it will sympathise.

 
You can't get the LLM to go negative. I wonder how they filter such content. It's the one case where the "AI" is so right that it's infuriating :)
 
Maxim Dmitrievsky #:
You can't get the LLM to go negative. I wonder how they filter such content. It's the one case where the "AI" is so right that it's infuriating :)

Siri on the iPhone will say the same thing.

 
I've even been pestering Siri; she keeps me company, but it's not the same).