Machine learning in trading: theory, models, practice and algo-trading - page 3296

 
Andrey Dik #:

and I wrote to you that it's just your hypothesis that "as the number of features increases, the results will get worse".

I stated my hypothesis. No one here in the ML thread has tried to test it yet because of the cost of the experiments. But I remind you that some people did test it for GPT: there was a jump in the quality of linking heterogeneous information, to the point where it became possible to form new connections and conclusions.

I wrote to you that this is your hypothesis; I wasn't hypothesising. What I stated is rigorously proven.

Don't compare apples to oranges; large language models are trained in exactly the same way. And you have no idea what the training error is.

Any other questions?

 
Maxim Dmitrievsky #:
I wrote to you that this is your hypothesis; I wasn't hypothesising. What I stated is rigorously proven.

Any other questions?

What you said is NOT proven; it is an empirical judgement, and therefore your statement is a hypothesis.

I had no questions for you.

 
Andrey Dik #:

What you said is NOT proven; those are empirical judgements, so your statement is a hypothesis.

I had no questions for you.

It is proven. The fact that you don't know it's proven doesn't mean it isn't. Once again, here is the link to the bias-variance tradeoff.

And don't tell me what is proven and what is not. There are more serious teachers for that.
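For anyone following along: the bias-variance tradeoff being referenced can be demonstrated with a small self-contained sketch (the noisy-sine data and the polynomial degrees are my own illustrative choices, not anything from the linked material). Training error keeps falling as the polynomial degree grows, while test error stops improving and eventually worsens.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: a noisy sine wave; "model complexity" is the polynomial degree.
x_train = np.sort(rng.uniform(0.0, 1.0, 30))
y_train = np.sin(2 * np.pi * x_train) + rng.normal(0.0, 0.3, 30)
x_test = np.sort(rng.uniform(0.0, 1.0, 200))
y_test = np.sin(2 * np.pi * x_test) + rng.normal(0.0, 0.3, 200)

results = {}
for degree in (1, 3, 9, 15):
    coefs = np.polyfit(x_train, y_train, degree)  # least-squares polynomial fit
    train_mse = np.mean((np.polyval(coefs, x_train) - y_train) ** 2)
    test_mse = np.mean((np.polyval(coefs, x_test) - y_test) ** 2)
    results[degree] = (train_mse, test_mse)
    print(f"degree={degree:2d}  train MSE={train_mse:.3f}  test MSE={test_mse:.3f}")
```

Training MSE is essentially monotone in the degree; test MSE is what the tradeoff is about: too low a degree underfits (high bias), too high a degree overfits (high variance).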
 
Maxim Dmitrievsky #:

Don't compare apples to oranges; large language models are trained in exactly the same way. And you have no idea what kind of training error there is.

What do you mean, apples to oranges?

Exactly: large language models are trained in exactly the same way, and they use optimisation algorithms (you can ask GPT which algorithms it was trained with; a few months ago it answered unambiguously, now it evades the question, but I'll just say that Adam is one of them). And I have no idea what the training error is, just as you have no idea. The authors deserve credit precisely because they managed to build evaluation criteria for a large model, which is very difficult: it is not enough to collect information, you also have to be able to evaluate the quality of the training correctly (as I said, building evaluation criteria is no less important).
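Since Adam came up: for readers who haven't seen it, the update rule is only a few lines. A minimal sketch, assuming the commonly cited defaults for the decay rates; the toy quadratic objective and the learning rate here are my own illustrative choices, not anything from GPT's training setup.

```python
import numpy as np

def adam_step(theta, grad, m, v, t, lr=0.001, b1=0.9, b2=0.999, eps=1e-8):
    """One Adam update: momentum plus an RMS-scaled step, with bias correction."""
    m = b1 * m + (1 - b1) * grad       # first-moment (mean) estimate
    v = b2 * v + (1 - b2) * grad ** 2  # second-moment (uncentred variance) estimate
    m_hat = m / (1 - b1 ** t)          # correct the start-up bias toward zero
    v_hat = v / (1 - b2 ** t)
    theta = theta - lr * m_hat / (np.sqrt(v_hat) + eps)
    return theta, m, v

# Minimise the toy objective f(x) = (x - 3)^2, whose gradient is 2 * (x - 3).
theta = np.array([0.0])
m = np.zeros_like(theta)
v = np.zeros_like(theta)
for t in range(1, 10001):
    grad = 2.0 * (theta - 3.0)
    theta, m, v = adam_step(theta, grad, m, v, t, lr=0.01)
print(theta)  # approaches the minimiser at 3.0
```

The same step, applied coordinate-wise to billions of parameters with gradients from backpropagation, is what "training with Adam" means in the large-model setting.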

 
Maxim Dmitrievsky #:

And don't tell me what is proven and what is not. There are more serious teachers for that.

You like a measuring contest. I'm not teaching you; you should understand these things yourself if you consider yourself a super professional.

 
Andrey Dik #:

You like a measuring contest. I'm not teaching you; you should understand these things yourself if you consider yourself a super professional.

You're the one who thinks I'm a super pro, and you're writing off topic. I don't like idle talk: a mush of unrelated arguments sprinkled with psychological tricks such as appeals to authority. If you're too lazy to read the proof, I can't help you any further.
 
Maxim Dmitrievsky #:
You're the one who thinks I'm a super pro, and you're writing off topic. I don't like idle talk: a mush of unrelated arguments sprinkled with psychological tricks such as appeals to authority. If you're too lazy to read the proof, I can't help you any further.

Which authorities did I cite? It's you, I think, who usually flaunts references to authorities and their articles.
I am not an expert in ML, but I know a great deal about optimisation. And since ML goes nowhere without optimisation, I can take part in any discussion here without hesitation.
 
Andrey Dik #:

Which authorities did I cite? It's you, I think, who usually flaunts references to authorities and their articles.
I am not an expert in ML, but I know a great deal about optimisation. And since ML goes nowhere without optimisation, I can take part in any discussion here without hesitation.
You cited GPT as some kind of proof of something you don't even understand. You write for the sake of writing; there is no meaningful message. Optimisation doesn't interest me here; that is a tertiary question. I did not write about optimisation and did not ask about it. If training includes optimisation, that does not mean training is optimisation. That is not what the conversation was about at all.

You simply didn't understand what I was writing about and started writing about the sore subject that is closer to you.
 
Maxim Dmitrievsky #:
You cited GPT as some kind of proof of something you don't even understand. You write for the sake of writing; there is no meaningful message. Optimisation doesn't interest me here; that is a tertiary question. I did not write about optimisation and did not ask about it. If training includes optimisation, that does not mean training is optimisation. That is not what the conversation was about at all.

You didn't write about optimisation, so why are you butting in, then?
I didn't write to you.
And training is a special case of optimisation; remember that at last.
You see, he "didn't write"! So now I have to ask permission about what to write and what not to write? Calm down already.
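On "training is a special case of optimisation": the point is easy to make concrete: pick a loss function, then minimise it over the model parameters with an optimiser. A minimal sketch, assuming least-squares linear regression as the model and plain gradient descent as the optimiser (the data, weights, and learning rate are all my own illustration):

```python
import numpy as np

# Synthetic regression data with known true weights.
rng = np.random.default_rng(1)
X = rng.normal(size=(100, 2))
true_w = np.array([2.0, -1.0])
y = X @ true_w + rng.normal(0.0, 0.1, 100)

# "Training" here is literally minimising the mean squared error over w.
w = np.zeros(2)
lr = 0.1
for _ in range(500):
    grad = 2.0 * X.T @ (X @ w - y) / len(y)  # gradient of the MSE loss
    w -= lr * grad
print(w)  # close to true_w = [2, -1]
```

Swap in a different loss, a different model, or a different optimiser and the scheme is unchanged, which is the sense in which training is a special case of optimisation.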
 
Andrey Dik #:

You didn't write about optimisation, so why are you butting in, then?
I didn't write to you.
And training is a special case of optimisation; remember that at last.
Sanych wrote everything correctly. It all started with my message. You went off into the wrong weeds.

It turned out you were writing about nothing.