AI 2023. Meet ChatGPT. - page 112

 
Peter Konow #:
1. Thank you for your reply.

2. And how accurate do you think the AI is in the proportions of salt, sugar, pepper and other spices for dishes it has never tasted?

What is your own head for, then?) The AI can advise you on anything, right?

Ideas are useful. How do you get ideas? You read a forum or an article, go to an exhibition of paintings or to the theatre, and ideas appear. You ask GPT for a recipe from the products you have on hand, and an idea appears in your head.

Approximation of information is, in my opinion, the central concept in how generative networks work. By submitting a request to such a network we receive approximated information; it is not something completely new, but it is different. Looking at this "different", ideas arise in a person's head. What technology allowed anything like this before? Ideas do not arise out of nowhere; this is how generative networks work, and this is how the human brain works. There is nothing good or bad about it, it is just the way it is.

But fantasy is something else, so far available only to humans. I could not make Kandinsky depict a horse running on the ceiling. (Why would I want that? And why do people do anything at all? Interest, curiosity: neither of these exists in networks, and neither should be expected there.) It is just a powerful tool in skilful hands.

 
Andrey Dik #:

..., no expert system of yesterday can even come close to giving the interactivity of working with information that today's graphical and language models provide.

The interactivity of working with information through AI is indeed beyond anything previously achieved.

However, in contrast, the stochastic nature of the language model significantly reduces the level of practical usefulness of the generated information. Further down in the thread, I will prove this.

While in the case of a cook, a statistical error in salt proportion does not lead to fatal consequences, in pharmacology, industrial chemistry, nuclear industry and other fields, such errors are critical.

Errors in doctor's and cookery recipes have fundamentally different meanings for a human, but for an AI it is the same. It will make a mistake in the proportions of chemical ingredients of a medicine as easily as in the proportions of ingredients of a dish.

Such a system is, in principle, incapable of NOT making mistakes, because of its statistical, probabilistic nature. This dead end can be overcome by connecting specialised programs and checking the generated results through them, or by generating the answer through exact programs in the first place. In that case, the LLM turns into a semantic speech interface to many programs. This is what will happen in the near future. But at that point, the development of AI will stagnate.
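The verification loop described above can be sketched in a few lines: a stochastic model proposes numeric proportions, and a deterministic checker validates them against known safe bounds before anything is accepted. This is a minimal illustration, not a real API; `SAFE_BOUNDS`, `parse_proportions` and `validate` are invented names, and the bounds are made-up numbers.

```python
# Hedged sketch: checking stochastic LLM output with a deterministic program.
# All names and bounds below are illustrative assumptions, not a real library.

SAFE_BOUNDS = {            # acceptable ranges per ingredient (fraction of total mass)
    "salt": (0.005, 0.02),
    "sugar": (0.0, 0.10),
}

def parse_proportions(llm_answer: str) -> dict:
    """Extract 'ingredient: 0.012' pairs from a model's plain-text answer."""
    result = {}
    for line in llm_answer.splitlines():
        if ":" in line:
            name, _, value = line.partition(":")
            try:
                result[name.strip().lower()] = float(value)
            except ValueError:
                continue  # skip lines that are not numeric proportions
    return result

def validate(proportions: dict) -> list:
    """Return a list of violations; an empty list means the answer passes."""
    errors = []
    for name, value in proportions.items():
        low, high = SAFE_BOUNDS.get(name, (0.0, 1.0))
        if not (low <= value <= high):
            errors.append(f"{name}: {value} outside [{low}, {high}]")
    return errors

answer = "salt: 0.05\nsugar: 0.03"   # imagine this came from the model
print(validate(parse_proportions(answer)))  # → ['salt: 0.05 outside [0.005, 0.02]']
```

The point of the sketch is the division of labour: the network may propose anything, but only what passes the exact check is ever used.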

 
Andrey Dik #:

1. Already subscribed. The subscription costs disproportionately less than the benefit it brings.

...

Aren't you confused by the fact that AI doesn't compile its code before giving it away?

And what's the benefit, to be precise - is it improving code that was already good, or making bad code okay?
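The compilation concern above can at least be addressed mechanically: generated code can be syntax-checked before anyone runs it. A minimal Python sketch, using only the built-in `compile()` (which catches syntax errors but says nothing about whether the logic is correct); the two snippets being checked are invented for illustration:

```python
# Hedged sketch: syntax-checking model-generated code before use.
# compile() only verifies syntax; it does not prove the program's logic.

def syntax_ok(source: str) -> bool:
    """Return True if the source parses as valid Python, False otherwise."""
    try:
        compile(source, "<generated>", "exec")
        return True
    except SyntaxError:
        return False

good = "def add(a, b):\n    return a + b\n"
bad = "def add(a, b)\n    return a + b\n"   # missing colon

print(syntax_ok(good), syntax_ok(bad))  # → True False
```

A check like this is cheap enough to run on every generated answer, which is exactly the kind of deterministic filter the thread is arguing about.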

 
Andrey Dik #:

What is your own head for, then?) The AI can advise on anything, right?

...

What am I saying?)

 
Peter Konow #:

The interactivity of working with information through AI is indeed beyond anything previously achieved.

However, in contrast, the stochastic nature of the language model significantly reduces the practical usefulness of the generated information. Further down in the thread, I will prove this.

While in the case of a cook, a statistical error in the proportion of salt does not lead to fatal consequences, in pharmacology, industrial chemistry, nuclear industry and other fields, such errors are critical.

Errors in doctor's and cookery recipes have fundamentally different meanings for humans, but for AI they are the same. It will make a mistake in the proportions of chemical ingredients of a medicine as easily as in the proportions of ingredients of a dish.

Such a system is, in principle, incapable of NOT making mistakes, because of its statistical, probabilistic nature. This dead end can be overcome by connecting specialised programs and checking the generated results through them, or by generating the answer through exact programs in the first place. In that case, the LLM turns into a semantic speech interface to many programs. This is what will happen in the near future. But at that point, the development of AI will stagnate.

You know, I have no desire to spend the time saved by AI on proving that networks are useless.)))))
Peter Konow #:

Aren't you confused by the fact that AI doesn't compile its code before outputting it?

And what's the benefit, to be more precise - is it improving code that was already good, or making bad code okay?

I don't care that it doesn't compile. I need the logic of the program, and GPT helps me with that perfectly. In most cases the same piece of code can be written in different ways, and I wouldn't trust the AI to verify that the code runs anyway; it is enough for me that it saves time on building the program's logic.

What is the benefit of making bad code good?!)))) Are you serious?

Peter Konow #:

What am I saying?)))))

I don't know what you're talking about, but my point is that you still need a head to use a hammer, and without a hammer, nails go in more slowly.

 

Ilya Sutskever, Chief Scientist and co-founder of OpenAI. A person on whose vision the development and deployment of GPT-4 depends.

I suggest listening to his interview and then discussing what he said.



Ilya Sutskever (OpenAI Chief Scientist) - Building AGI, Alignment, Spies, Microsoft, & Enlightenment
  • 2023.03.27
  • www.youtube.com
 
Peter Konow #:

Do you understand the difference between a wheel, a cart, a wagon and an automobile?

These are stages of technological development aimed at exactly the same thing: speeding up work. Accelerated work, in turn, drives exponential growth of further technological development. It is like a terminator returning to the past to hand over a super-duper microchip so that mankind would not spend decades inventing it. There it is, ready-made, with just a corner broken off. Go ahead, Cyberdyne, speed up the end of mankind.

Any mathematical device that helps speed up calculations or increase the efficiency of problem solving is a small revolution in itself. Error backpropagation is one small technological revolution. Training a neural network to solve a task, freeing a human being from that labour, is another.

No effective tool can, a priori, be useless or unnecessary. The creation of the language model is a continuation of technological development: T9 predictive typing once freed up our time and energy when writing long words. Now Chat writes the texts of statements of claim, freeing citizens from spending money and time on lawyers (a good full-time one or a bad one-off one, as luck would have it), explains laws and regulations, and drafts sample judgments for judges' assistants, leaving them only the checking and editing; even judges themselves can use Chat. And this is only one small part of what Chat 3.5 can do. What will Chat 4 be able to do, and Chat 5 with trillions of parameters? And what about a specialised chatbot deliberately trained on the particular legal system of one country? Considering how the "raw" 3.5 handles the problem of the value of two lives versus one, a chat could handle not only the technical part, such as the division of jointly acquired property, but also more complex problems, if we go in this direction. What can 4 do? What can 5 do? And what will Chat 6 be able to do?
This is an example from one area, there are plenty of these areas.

Not to understand the "revolutionary" nature of the language model, and of today's AI in general (which you call here essentially a "hammer" in human hands: a lifeless, useless, hyped tool), is to be blinkered and confused in your own judgement.

It's one thing to look for applications of networks, as Vitaly and Andrey do above; it's another to dismiss a thing that may put you out of a job. Not even in a bad sense: it will "free" you from work, and what happens after that depends on the policy of a particular country and how it will "distribute the benefits" of civilisation in the future.

Today's 3.5 is an embryo. You call an embryo a dumb thing because it cannot solve some complex maths problem. It simply hasn't been trained to solve it yet. Ask version 4 the same thing, and then version 5, or the specialised version that will (expectedly) appear in the future. I suspect they will turn YOU into the "embryo" and rub your nose in your own logical errors in reasoning, but they will not laugh, because Sam Altman forbade them to be bad, and the chats will kindly inform you about it.

Now there is only one question: how to apply all this most effectively, where to plug it in, how, and at what angle. The field is unploughed for another exponential technological leap. The question of its revolutionary nature is closed; this thing will change our lives. Anything that changes humanity's life is revolutionary. That is already an obvious fact.

You too often slide into demagogy and nihilism; this tends to happen when dealing with abstract concepts, in particular "consciousness", which you and others used to operate with earlier. You should think more practically: logic is not about debates or sophistry, it is about the relations of phenomena, objects and subjects.

 
Ivan Butko #:

...

I read your post carefully. Thank you for your opinion.

 
Ivan Butko #:

...

I love it.)

 
Ivan Butko #:

Not to understand the "revolutionary" nature of the language model, and of today's AI in general (which you call here essentially a "hammer" in human hands: a lifeless, useless, hyped tool), is to be blinkered and confused in your own judgement.

What is fundamentally likely to change? AI generates pictures? The whole internet was full of pictures long before AI. It generates texts and articles? The whole Internet is already full of the output of copywriters and other "writers". Video content will become even more contrived? Adverts even more impressive? Hmmm... Given how accustomed society already is to this flow of information, changing its pressure does not fundamentally change anything. Yes, there will be even more bots on social networks and dating sites, very subtly simulating the behaviour of real people. Will scammers thrive even more with their fakes? Fake product reviews, fake subscribers, fake everything. And how does society react? The perceived value of such content is already plummeting today, and from here on it will simply turn to mud.

You say AI will be great at solving certain problems? It is already being used wherever it is needed, in the form of specialised solutions. For example, deciphering the human genome in full detail began long before this talking trinket appeared.

Semantic search of information on the Internet is good, but again, will the technology help rid users of paid advertising, censorship and propaganda? Do you seriously believe that?