AI 2023. Meet ChatGPT. - page 62

 

I've been thinking about using this bot for humour.

I came across a modern book on humour. It described structures, techniques and comedic moves, and the examples were hilarious.

So, having trained a neural network on all these moves, you could generate humour with any characters, situations, places and times.

Sure, a tonne of the output won't be funny, but there will be masterpieces among it.

The quality of comedy programmes will rise, and comedies will become genuinely funny.

The same can be applied to dramas, thrillers and the rest, because directing courses teach scriptwriting techniques and moves, methods of holding the audience's attention, NLP and so on.

In the end, such content will raise the bar, but to what heights, and what will happen to a person saturated with all of it? It's like a kind of drug addiction: the stronger the emotion, the stronger the dependence.

There will be an urgent need for a "holiday for mankind" from all things digital.

 
Vitaliy Kuznetsov #:

I've been thinking about using this bot for humour.

...


Reminds me of Ostap: for the first time in the history of mankind, an intergalactic chess tournament will be held in New Vasyuki!!! ;-)
 

Bing: I have a limit of 10 messages per chat, after which it writes:

Unfortunately, a limit has been reached for this conversation. Click the broom button to clear this and continue chatting.

Is this how it's supposed to be?

 
Vitaliy Kuznetsov #:
Imagine : AI Art Generator v2.3.1 b69 [En]

All the subtleties and nuances I'll leave without comment.

Thank you. It's frightening how many of these applications turned out to be made for phones) They make serious use of the API)

Downloaded the pro APK, it seems broken, I'll try to install it later)

 
Meta's powerful AI language model has leaked onto the internet - what's happening now?
Meta’s powerful AI language model has leaked online — what happens now?
  • 2023.03.08
  • www.theverge.com
Two weeks ago, Meta announced its latest AI language model: LLaMA. Though not accessible to the public like OpenAI’s ChatGPT or Microsoft’s Bing, LLaMA is Meta’s contribution to a surge in AI language tech that promises new ways to interact with our computers as well as new dangers. Meta did not release LLaMA as a public chatbot (though the...
 
Lilita Bogachkova #:
Meta's powerful AI language model has leaked onto the internet - what's happening now?
Ok, very interesting. If there was a torrent, I'd download it :)

But there are questions and doubts:

1. A raw model with 13B parameters can be compared, after fine-tuning, to a model with 175B parameters? Really? Seems like an exaggeration.

2. What resources are required for fine-tuning?

It's one thing to use the model, another to tune it. According to the article, the resources for running it are available to almost anyone, and the model can be run locally (a rough sketch of what that looks like follows at the end of this post), but training or further pre-training is another matter. It's not clear how accessible that process is to the general public. Although, perhaps a local model could be further trained on a rented server for a fee?

3. What resources are required for stable operation?

If almost everyone can run (not even configure, just run) a local analogue of ChatGPT, then the OpenAI and Microsoft business model, with users tied to their servers and paying subscription fees, will inevitably collapse. The emergence of local, well-developed, domain-aware conversational AI will bury their current plans. Have they really not calculated the risks? They would have done better to sell customised models to a wide range of businesses straight away...

Perhaps a shift from server-based AI to local AI is inevitable, but how realistic is it with current technologies?
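
To make it concrete, here is a minimal sketch (my own illustration, not something from the article) of what "running it locally" amounts to, assuming the Hugging Face transformers and accelerate libraries and a hypothetical folder ./llama-13b that already contains converted weights and tokenizer files:

# Minimal local inference sketch (assumptions: transformers + accelerate installed,
# converted weights in ./llama-13b, enough GPU/CPU memory to hold them).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_DIR = "./llama-13b"  # hypothetical path to locally stored weights

tokenizer = AutoTokenizer.from_pretrained(MODEL_DIR)
model = AutoModelForCausalLM.from_pretrained(
    MODEL_DIR,
    torch_dtype=torch.float16,  # half precision: roughly 2 bytes per parameter
    device_map="auto",          # let accelerate spread layers over GPU(s) and CPU
)

prompt = "Briefly: why does a raw language model still need fine-tuning?"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=60, do_sample=True, temperature=0.7)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))

The code itself is trivial; the barrier is the hardware (tens of gigabytes of GPU memory for the larger variants) and the unglamorous work of obtaining, converting and then tuning the weights.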
 

Peter Konow #:
Ok, very interesting.

...

But there are questions and doubts:

I decided to get a better understanding of the article, as the information seemed contradictory. Something doesn't add up: a leaked AI model could mean a lot of things, or almost nothing, or nothing at all.

Let's break down the article:

1. Meta announced its large language model LLaMA; this is the company's contribution to the language-AI boom that is under way.


2. Unlike OpenAI and Microsoft, Meta did not make LLaMA a public chatbot (although Zuckerberg and co. are working on that too). Instead, Meta made the model open: anyone in the AI community (I'm not sure exactly who they mean) can request access to it and take part in beta testing and troubleshooting.

That is, Meta's goal is to distribute the raw model to advanced users for feedback and help with customisation. (But their main goal is of course highly moral: DEMOCRATISATION OF ACCESS TO INNOVATION).

Here's a loose paraphrase of a quote from their blog post on the subject: "despite recent significant advances in language models, research access to them remains limited due to the high system requirements for training and running these models. This limits researchers' ability to understand how these models work and to help us address their problems, such as bias, toxicity and misinformation."

Ok, that's understandable. Moving on...


3. Just a week after LLaMA became available for testing, on 3 March the model was "leaked" online on 4chan (a Western imageboard), where a torrent for downloading it appeared (I wonder whether it's still there?). Since then the torrent has spread through various AI communities (probably ones like this thread :)), and debates have begun over how best to share research data (that's what real scientists argue about!).


4. I will omit details that are insignificant in the context of this topic about the differing opinions and conflicting judgements on the consequences of giving the masses unregulated access to advanced LLMs. We have heard and read all of this before in various articles and comments. What interests us is the leaked model and the benefit to be gained from it (can it be downloaded, customised and sold? :)). Most of all, we are interested in the trend of introducing advanced conversational AI into all spheres of life: how exactly will this process unfold? And it would be desirable to know before others do. So, let's look for answers to these questions in the article.

The Verge spoke to several researchers who downloaded the "leaked" model, and they said it was "legitimate", i.e., in this context, genuine, the real thing. Specifically, one of them, a certain Matthew Di Ferrante, reportedly compared the downloaded model with the original LLaMA he had access to and concluded that they are identical. Joelle Pineau, director of Meta AI, said that while access to LLaMA remains relatively closed (semi-open), someone has decided to circumvent the approval process for verified users...


5. LLaMA is powerful AI - if you've got the time, expertise, and right hardware.

I would highlight the last point separately. What do they mean by "the right hardware"? What kind of hardware? What do you need to own to run a "counterfeit" LLaMA? I mean, is there any hope of downloading pseudo-LLaMA and making a personal AI out of it? Look, miners were building practically supercomputers in their communal flats... They had the competence and they found the equipment. So, is it possible?

The article's answer to this question is as follows: simply downloading LLaMA will do little for the average user. It is not a ready-made, tuned chatbot but a raw AI system, and getting it going requires technical knowledge, experience and professionalism, and of course significant computing resources. That is, it's not that setting it up and switching it on is impossible, but for an ordinary user it is very, very difficult... For a more advanced one it is correspondingly easier, for an even more advanced one easier still, and so on... So, it is possible. If you really want to, you can do it...

Here is a direct quote from Matthew Di Ferrante: "anyone familiar with setting up servers and dev environments for complex projects" should be able to get LLaMA operational "given enough time and proper instructions".

Well, let everyone think for themselves whether they can run LLaMA.

*(Though it's worth noting that Di Ferrante is also an experienced machine learning engineer with access to a "machine learning workstation that has 4 24GB GPUs" and so is not representative of the broader population.)


6. LLaMA is a "raw" model that requires a lot of work to get operational.

In addition to the technical and competence barriers facing enthusiasts, the next barrier is that LLaMA is not fine-tuned: in its raw state it cannot be used like ChatGPT or Bing. Fine-tuning is the process by which the model's text-generation ability is focused on specific tasks. Those tasks can come from a wide variety of possible applications, and the tuning is done so that the model responds to user queries in as correct and clear a form as possible (a rough sketch of what this looks like in code follows below).
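
For a sense of what that fine-tuning step involves in practice, here is a rough sketch (my illustration, not the article's or Meta's procedure), assuming the transformers, peft and datasets libraries, hypothetical local weights in ./llama-13b and a hypothetical instruction file ./instructions.json with a "text" field. It uses LoRA adapters, a parameter-efficient method, precisely because full fine-tuning of a 13B model is out of reach for most home hardware:

# Parameter-efficient fine-tuning sketch (assumptions: transformers, peft and datasets
# installed; hypothetical weights in ./llama-13b and data in ./instructions.json).
from datasets import load_dataset
from peft import LoraConfig, get_peft_model
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer, TrainingArguments)

MODEL_DIR = "./llama-13b"                  # hypothetical local weights
tokenizer = AutoTokenizer.from_pretrained(MODEL_DIR)
tokenizer.pad_token = tokenizer.eos_token  # causal LMs often ship without a pad token
model = AutoModelForCausalLM.from_pretrained(MODEL_DIR, device_map="auto")

# Freeze the base model and add small trainable LoRA matrices to the attention layers.
model = get_peft_model(model, LoraConfig(r=8, lora_alpha=16,
                                         target_modules=["q_proj", "v_proj"],
                                         task_type="CAUSAL_LM"))

dataset = load_dataset("json", data_files="./instructions.json")["train"]
dataset = dataset.map(lambda ex: tokenizer(ex["text"], truncation=True, max_length=512),
                      remove_columns=dataset.column_names)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="./llama-13b-tuned",
                           per_device_train_batch_size=1,
                           gradient_accumulation_steps=16,
                           num_train_epochs=1,
                           learning_rate=2e-4,
                           fp16=True),
    train_dataset=dataset,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
model.save_pretrained("./llama-13b-tuned")  # saves only the small adapter weights

The point of LoRA here is that only the small adapter matrices are trained and saved, so the "specialisation" fits on a single GPU and in a file of a few hundred megabytes, while the frozen base weights stay untouched.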

Stella Biderman, director of the non-profit AI lab EleutherAI, says the following: "Most people don't own the hardware required to run [the largest version of LLaMA] at all, let alone efficiently."

Ok, what about the smaller LLaMA models?

And here's the answer: LLaMA comes in 4 sizes: 7B, 13B, 30B and 65B (B for billion parameters). Meta claims that the 13B version can run on a single A100 GPU, a business-class graphics card that is relatively accessible to the masses: it can be rented on cloud platforms for a few dollars an hour. Meta also claims this version outperforms the 175-billion-parameter GPT-3 on several benchmarks, but, the article says, that is debatable and far from certain. Some advanced users said they have already encountered problematic output from this system; others say it is a matter of skill and the ability to tune the model. In general, it is not clear, but there is an assumption that with proper, professional tuning, the capabilities LLaMA provides will reach the level of ChatGPT. (Some rough memory arithmetic by model size is sketched below.)
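
To make the hardware claims concrete, here is some back-of-the-envelope arithmetic (my own rough figures, not numbers from the article): the weights alone need roughly bytes-per-parameter times parameter count, before activations, caches and other runtime overhead are added.

# Rough weight-memory estimates per model size and precision (illustration only).
SIZES = {"7B": 7e9, "13B": 13e9, "30B": 30e9, "65B": 65e9}        # parameter counts
BYTES_PER_PARAM = {"fp32": 4, "fp16": 2, "int8": 1, "int4": 0.5}  # bytes per parameter

for name, params in SIZES.items():
    row = ", ".join(f"{prec}: {params * b / 2**30:6.1f} GiB"
                    for prec, b in BYTES_PER_PARAM.items())
    print(f"LLaMA-{name}  {row}")

# E.g. the 13B weights come to ~24 GiB in fp16, which is why a single 40/80 GB A100
# is a plausible home for that size, while 65B (~121 GiB in fp16) is not a
# single-GPU proposition without aggressive quantisation.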


Anyway, I don't know, decide for yourself).


The article goes on to talk about the impact of AI on our lives, the consequences of misinformation and so on... I think the points that interest us have been covered here quite well.

 

Something incredible is going on in the AI world lately.

The latest news:

1. Possible multimodal GPT-4 coming in a week and new video processing capabilities: GPT-4 is Coming Next Week? Plus More Insane AI Tools! - YouTube

2. The imminent arrival of the ultra-realistic Midjourney 5: MidJourney V5 Looks Insane! - YouTube

GPT-4 is Coming Next Week? Plus More Insane AI Tools!
  • 2023.03.10
  • www.youtube.com
In this video I explore some of the crazy advancements that have happened this week in the world of AI. I break down a lot of things in this one so sit back ...
 
Peter Konow #:

Something incredible has been going on in the AI world lately.

And on Twitch there are already live broadcasts of series written by AI and turned into video by AI. It's still a little crude, but in a couple of years it'll be a masterpiece.

Download the series LOST, delete the last few episodes and ask it to finish the series properly, revealing all the secrets shown earlier. Profit.

Or ask AI to redo the ending of Game of Thrones according to the book. Etc.

Oversaturation with such content will lead to an amoeba-like existence. The real world will stop producing emotions.

 
Vitaliy Kuznetsov #:

And on Twitch there are already live broadcasts of series written by AI and turned into video by AI. It's still a little crude, but in a couple of years it'll be a masterpiece.

Download the series LOST, delete the last few episodes and ask it to finish the series properly, revealing all the secrets shown earlier. Profit.

Could you give a link? Interesting.

Judging by recent news, AI can process video and replace the people in it with fictional characters (mimicking the movements of the person in the frame), but creating entire episodes of TV series... I haven't heard of that yet.