Machine learning in trading: theory, models, practice and algo-trading - page 3296

And I wrote to you that it's just your hypothesis that "as the number of features increases, the results will get worse".
I stated my hypothesis. No one here in the ML section has tried to test it yet because of the cost of the experiments. But I remind you that some people did test it for GPT: there was a jump in the quality of linking heterogeneous information, to the point where it became possible to form new connections and conclusions.
I wrote to you that this is your hypothesis; I did not hypothesise anything. It is strictly proven.
What you said is NOT proven; it is an empirical judgement, therefore your statement is a hypothesis.
I had no questions for you.
What finger to finger?
Exactly: large language models are trained in exactly the same way, and they use optimisation algorithms (you can ask GPT which algorithms it was trained with; a few months ago it answered unambiguously, now it is evasive, but I'll just say that Adam is one of them). And I have no idea what the training error there is, just as you have no idea. The authors deserve credit precisely because they were able to build evaluation criteria for a large model, which is very difficult: it is not enough to collect the data, you also need to be able to evaluate the quality of training correctly (as I said, building evaluation criteria is no less important).
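For reference, the Adam optimiser mentioned above keeps running estimates of the gradient's first and second moments and uses them to scale each update. Below is a minimal, self-contained sketch of one Adam step on a toy quadratic; this is a hypothetical illustration of the algorithm, not GPT's actual training code, and the learning-rate/step-count values are arbitrary choices for the example.

```python
import math

def adam_step(param, grad, m, v, t, lr=0.01, beta1=0.9, beta2=0.999, eps=1e-8):
    """One Adam update; returns the new (param, m, v) state."""
    m = beta1 * m + (1 - beta1) * grad          # first-moment (mean) estimate
    v = beta2 * v + (1 - beta2) * grad * grad   # second-moment (uncentred variance) estimate
    m_hat = m / (1 - beta1 ** t)                # bias correction for the warm-up phase
    v_hat = v / (1 - beta2 ** t)
    param -= lr * m_hat / (math.sqrt(v_hat) + eps)
    return param, m, v

# Toy usage: minimise f(x) = x^2, whose gradient is 2x.
x, m, v = 5.0, 0.0, 0.0
for t in range(1, 2001):
    x, m, v = adam_step(x, 2 * x, m, v, t)
print(x)  # x ends up close to the minimum at 0
```

The per-parameter scaling by `sqrt(v_hat)` is what lets Adam cope with gradients of very different magnitudes across millions of parameters, which is one reason it became a default choice for training large models.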
You do like a measuring contest. I'm not going to teach you; you should understand these things yourself if you consider yourself a super-professional.
You're the one who decided I'm a super pro and keeps writing off-topic. I don't like idle talk: a mush of unrelated arguments sprinkled with psychological tricks like appeals to authority. If you're too lazy to read the proof, I can't help you any further.
You cited GPT as some kind of proof of who knows what. You're writing just for the sake of writing; there is no meaningful message there. I'm not interested in optimisation, that's a tertiary question. I did not write about optimisation and did not ask about it. Even if training includes optimisation, that doesn't mean training *is* optimisation. That's not what the conversation was about at all.