AI 2023. Meet ChatGPT.

 

Among the Chrome extensions there is a ChatGPT one that works alongside the search engine. I tried it with Yandex. But here's the problem: I couldn't register, because a VPN didn't help there - you have to enter a phone number, and a Russian (RF) one isn't accepted.

Can anyone post instructions on how to register from Russia, or could someone create a fresh account and send me the login/password in a private message?

 
Vitaliy Kuznetsov #:

Among the Chrome extensions there is a ChatGPT one that works alongside the search engine. I tried it with Yandex. But here's the problem: I couldn't register, because a VPN didn't help there - you have to enter a phone number, and a Russian (RF) one isn't accepted.

Can anyone post instructions on how to register from Russia, or could someone create a fresh account and send me the login/password in a private message?

Hi!

There is a Russian site, onlinesim.ru, where you can rent a foreign number for 15 roubles per SMS.

Then you can register with OpenAI.

 

And this is what ChatGPT replied: that you should write them an email.

 
ChatGPT is an ordinary neural network with context. Unless it is deliberately taught something, it shows no visible signs of intelligence. And even then, one step to the left or right and that's it - again it looks nothing like intelligence.

If it writes plausible text, that's not intelligence, it's just copying and rearranging what it has learnt from. In other words, there's nothing under the bonnet that could lead it to even the rudiments of intelligence.

Everything there, from the get-go, is human language and mimicking it like a monkey, sometimes with context pulled out thanks to the fact that any language has structure.

And that's where these network architectures will be stuck for a long time, until someone comes up with other toys.
 
Maxim Dmitrievsky #:

ChatGPT is an ordinary neural network with context. Unless it is deliberately taught something, it shows no visible signs of intelligence. And even then, one step to the left or right and that's it - again it looks nothing like intelligence.

If it writes plausible text, that's not intelligence, it's just copying and rearranging what it has learnt from. In other words, there's nothing under the bonnet that could lead it to even the rudiments of intelligence.

Everything there, from the get-go, is human language and mimicking it like a monkey, sometimes with context pulled out thanks to the fact that any language has structure.

And that's where these network architectures will be stuck for a long time, until someone comes up with other toys.
Yes, and by the way, I agree - seeing how "over-clever" :-) this toy already is... :-)
 
Peter Konow #:

Having reread the "lecture", I regretfully realised that the work was in vain: the text does not convey the professor's actual speech. Everything is distorted beyond recognition.

Tomorrow I will try to write my own summary of the webinar's content.


A notification just came from the Higher School of Economics (HSE) - isn't this that same talk? It's on YouTube, in English.

https://youtu.be/NmdSWfPi0AU

It's 37 minutes long, not 1 hour. Maybe they cut it down...

 
Roman Shiredchenko #:

A notification came from HSE. Isn't that the talk? It's on YouTube, in English.

https://youtu.be/NmdSWfPi0AU
37 minutes, not 1 hour. Maybe they cut it...

No, it's a different video.

Here is the link: Stanford Webinar - GPT-3 & Beyond - YouTube

Stanford Webinar - GPT-3 & Beyond (www.youtube.com, 2023.01.31): key concepts and open questions in a golden age for natural language understanding. Professor Christopher Potts discusses the signi...
 

And so, a loose summary of Christopher Potts' webinar (link to the video in the post above).

I did what I could. Of course, it's better to watch the original. Some points are not mentioned because I don't yet understand the ML topic well enough.

...

  1. We live in a golden era of natural language processing technology. Back in 2012 it was impossible to imagine the incredible advances we have today in ML, and in NLP in particular.
  2. The social significance and impact of AI technologies on our lives keeps growing. Worth mentioning are the new possibilities created by models such as DALL-E, Midjourney and Stable Diffusion for image generation, and the Codex-based Copilot for code generation. Also You.com, an advanced internet search engine; Whisper from OpenAI, which transcribes human speech; the giant GPT-3; and other open-source models such as OPT, BLOOM, GPT-NeoX, etc.
  3. These models show the tremendous progress of AI technologies over the last 30 years. If we chart the development of machine learning since the 90s, we see an exponentially rising curve. Over the last 20 years, digit recognition on MNIST has reached human level, and speech recognition on the Switchboard benchmark performs as well as a human.
  4. Of course, one of the factors driving the progress of large language models (LLM - Large Language Model) is their growing size. In 2018 the largest language model had 100 million parameters; today Google has created the PaLM model, exceeding 500 billion parameters (*the Chinese have a model of 1.5 trillion parameters, if ChatGPT is to be believed).
  5. Modern language models confidently pass stress testing on benchmarks such as GLUE and SuperGLUE, specially designed for this purpose - a result that was unattainable for smaller models. This clearly shows a direct correlation between problem-solving ability and model size. It is also important to mention a significant paradigm shift in NLP, described in detail in the GPT-3 paper. In addition, the Transformer architecture and the "attention mechanism", well described by its authors in the famous "Attention Is All You Need" paper by Google deep-learning researchers, were breakthroughs (*by the way, the ideas of this paper were adopted by OpenAI and implemented in their first GPT models). This was a step up from the LSTM mechanisms used before the Transformer era. (A toy sketch of the attention operation follows after this summary.)
  6. The next innovation was self-supervision: a fundamental mechanism by which the model learns to assign high probability to attested sequences (*sounds nice, but I don't know what it means). (The sketch after this summary also illustrates this objective.)
  7. Today we are witnessing LLMs revolutionising web search, again thanks to Transformer technology. It started when Google incorporated aspects of their BERT language model into their search algorithm. Microsoft followed and made LLMs an important component of searching the web for information and presenting the results to the user. We are now seeing the introduction of the new concept of "search powered by AI" - i.e. searching for information together with an interactive dialogue "agent".
  8. It is important to mention that one of the factors behind the explosive growth in the size of language models is their dual function: they simultaneously serve as a repository of knowledge and of language capabilities. If we could separate these two things, we could reduce the size of these models. (*Surprisingly, I came to the same conclusion even before watching the professor's lecture, although he definitely articulated the idea much better.) (A second toy sketch of this separation follows at the end of this post.)
  9. The main problem with using LLMs remains reliability: how much can they be trusted? There is also the issue of efficiency - does the benefit they provide justify the resources consumed? And there is the problem of "bias" in language models: racism, extremism and other negative aspects of human communication contained in the training data can "leak" into them, and it is impossible to get rid of this completely.
  10. It is worth thinking about the impact of AI on the environment in terms of electricity consumption, the generation of which causes harmful emissions into the atmosphere. Also worth mentioning is the need for a model that translates images into text descriptions, which would be very useful for people with vision problems.

...
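Not code from the webinar - just a minimal NumPy sketch to make points 5 and 6 concrete: the scaled dot-product attention at the heart of the Transformer, and the self-supervised next-token objective (training the model to assign high probability to attested sequences). All the numbers are random stand-ins for learned parameters.

import numpy as np

def softmax(x, axis=-1):
    # Shift by the max for numerical stability before exponentiating.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def scaled_dot_product_attention(Q, K, V):
    # Q, K, V: (seq_len, d) matrices of queries, keys and values.
    d = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d)        # pairwise query-key similarity
    weights = softmax(scores, axis=-1)   # each row is an attention distribution
    return weights @ V                   # weighted mixture of value vectors

def next_token_loss(logits, targets):
    # Self-supervision: minimise cross-entropy of the tokens actually observed,
    # i.e. train the model to assign high probability to attested sequences.
    probs = softmax(logits, axis=-1)
    return -np.mean(np.log(probs[np.arange(len(targets)), targets]))

rng = np.random.default_rng(0)
Q, K, V = (rng.normal(size=(4, 8)) for _ in range(3))
print(scaled_dot_product_attention(Q, K, V).shape)   # -> (4, 8)

logits = rng.normal(size=(4, 10))        # 4 positions, toy vocabulary of 10 tokens
targets = np.array([1, 3, 5, 7])         # the "attested" next tokens
print(next_token_loss(logits, targets))  # lower = more probability on them

In a real Transformer, Q, K and V are learned linear projections of token embeddings, and attention runs over many heads and layers; the sketch keeps only the core arithmetic.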

I didn't mention a lot of things and retold some of it in my own way, but I tried to keep the essence. I hope I did better than ChatGPT. :)

There were fragments of the lecture discussing the deep nuances of the solutions, approaches and methods used in modern NLP, which I omitted. They require knowledge of the underlying theory, but the general sense of the technologies mentioned seemed accessible even to me, a noob in ML, and the lecture was useful. Many things became clearer.
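As for point 8, one way to picture the separation - purely a toy example, the lecture does not prescribe any particular design - is a retrieve-then-phrase pipeline, where facts live in an external store and the "language" side only has to phrase them. Here a dictionary stands in for a real retriever and an f-string for a small language model:

# Toy illustration of point 8: knowledge in an external store,
# language handled separately (facts taken from item 4 of the summary).
KNOWLEDGE_STORE = {
    "palm parameter count": "over 500 billion parameters",
    "largest model in 2018": "about 100 million parameters",
}

def retrieve(query: str) -> str:
    # Dumb substring matching in place of real vector search.
    q = query.lower().rstrip("?")
    for key, fact in KNOWLEDGE_STORE.items():
        if key in q:
            return fact
    return "no stored fact"

def answer(query: str) -> str:
    # The language side needs no memorised facts, only phrasing.
    return f'{query.rstrip("?")} is {retrieve(query)}.'

print(answer("PaLM parameter count?"))
# -> PaLM parameter count is over 500 billion parameters.

A small model plus a large external store could then, in principle, answer questions that a standalone model would have to spend parameters memorising.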

 
Vitaliy Kuznetsov #:

Among the Chrome extensions there is a ChatGPT one that works alongside the search engine. I tried it with Yandex. But here's the problem: I couldn't register, because a VPN didn't help there - you have to enter a phone number, and a Russian (RF) one isn't accepted.

Can anyone post instructions on how to register from Russia, or could someone create a fresh account and send me the login/password in a private message?

Check your private messages.

 

Look what I found:

# Requires the legacy OpenAI Python library (pre-1.0, e.g. openai==0.28).
import openai
openai.api_key = "YOUR_API_KEY"  # your secret key from the OpenAI dashboard

prompt = "What is the meaning of life?"
model = "text-davinci-002"  # a GPT-3 completion model

# Ask the model to continue the prompt, returning at most 50 tokens.
response = openai.Completion.create(engine=model, prompt=prompt, max_tokens=50)

# The generated text is in the first returned choice.
print(response.choices[0].text)
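Note that the API key is still tied to an OpenAI account, so this doesn't bypass registration - but once you have a key, the API works without the browser chat interface.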