Machine learning in trading: theory, models, practice and algo-trading - page 3667

With model training?
Optimisation. First the model is trained, then the cases that trade well are pulled from it. Like a database.
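A minimal sketch of that idea (train first, then pull the "cases" that trade well, like querying a database); the file and column names below are placeholders, not anyone's actual setup:

import pandas as pd
from sklearn.ensemble import RandomForestClassifier

df = pd.read_csv("bars_with_features.csv")   # hypothetical prepared file
X = df[["f1", "f2", "f3"]]                   # hypothetical features
y = (df["ret_next"] > 0)                     # next-bar direction

model = RandomForestClassifier(n_estimators=200).fit(X, y)
df["signal"] = model.predict(X)

# Group the history into "cases" (here simply by a feature bucket) and keep
# only the buckets whose signals actually made money.
df["bucket"] = pd.qcut(df["f1"], 10, labels=False)
pnl = df.groupby("bucket").apply(lambda g: (g["ret_next"] * (g["signal"] * 2 - 1)).sum())
good_cases = pnl[pnl > 0].index.tolist()
print("cases that trade well:", good_cases)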
GigaChat offers an inexpensive API; has nobody tried building bots through queries yet?
I'm also wondering how to phrase the queries, for example for technical analysis.
We need a group of prompt enthusiasts to test different queries and get predictions.
This can be tried offhand in the free version.
Then you can simply collect the predictions into dataframes for each type of query via the API and test them.
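A minimal sketch of that idea, assuming the OpenAI-style chat endpoint and response fields described in the GigaChat docs; the URL, payload fields and prompt wording below are assumptions to be checked against the current documentation, and ACCESS_TOKEN is a placeholder:

import requests
import pandas as pd

URL = "https://gigachat.devices.sberbank.ru/api/v1/chat/completions"  # check the docs
ACCESS_TOKEN = "..."  # obtained via the OAuth endpoint described in the GigaChat docs

def ask(prompt: str) -> str:
    # Send one chat request and return the model's text answer.
    resp = requests.post(
        URL,
        headers={"Authorization": f"Bearer {ACCESS_TOKEN}"},
        json={"model": "GigaChat",
              "messages": [{"role": "user", "content": prompt}]},
    )
    resp.raise_for_status()
    return resp.json()["choices"][0]["message"]["content"]

# One prompt template per "type of query"; the wording is illustrative only.
query_types = {
    "tech": "Here are the last 20 closes: {bars}. Answer only UP or DOWN.",
    "news": "Given this headline: {news}. Answer only UP or DOWN for EURUSD.",
}

rows = []
for name, template in query_types.items():
    answer = ask(template.format(bars="...", news="..."))
    rows.append({"query_type": name, "prediction": answer})

df = pd.DataFrame(rows)  # one row of predictions per query type, ready for testing
print(df)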
Nyce:
He forgot to mention Stochastic directly, or he was shy :)
Man, what childish pettiness this is.
Maxim, I hope you have moved this to
:)
It won't let me send a request via the API; does anyone know about the certificates?
Found it.
https://developers.sber.ru/docs/ru/gigachat/certificates
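For the TLS error itself: that page describes trusting the Russian Trusted Root CA used by the GigaChat endpoints. A minimal sketch with requests, assuming the certificate has been downloaded from the link above (the file name is a placeholder):

import requests

resp = requests.post(
    "https://gigachat.devices.sberbank.ru/api/v1/chat/completions",  # check the docs
    headers={"Authorization": "Bearer <ACCESS_TOKEN>"},
    json={"model": "GigaChat", "messages": [{"role": "user", "content": "ping"}]},
    verify="russian_trusted_root_ca.cer",  # path to the downloaded CA certificate
)
print(resp.status_code)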
An example of presetting the chat to give the desired output.
It works.
In skilful hands it could turn into an original trading system.
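A minimal sketch of what such "presetting" might look like: a system message that pins the output format so the answers can be parsed mechanically (the wording is illustrative, not the poster's actual prompt):

# Passed as the "messages" field of the chat request shown earlier.
messages = [
    {"role": "system",
     "content": "You are a trading assistant. Reply with exactly one word: "
                "BUY, SELL or HOLD. No explanations."},
    {"role": "user",
     "content": "Last 10 closes: 1.0841 1.0845 ... 1.0860. Your call?"},
]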
Test and compare in 10 years. And even then the question remains whether it peeked at the data.
However...
I think this is just the beginning of a new order.
No, it won't turn into a trading system, and skilful hands have nothing to do with it.
It's about ideology.
All these GPTs are generalisations of what's available.
We need predictions of the future, which NONE of the GPTs can do yet. I posted a relevant video earlier in this thread with an analysis of GPT and the other algorithms of "artificial" intelligence in use.
The gold standard for establishing the truth is the MetaQuotes tester: we predict the value/direction of the next bar and collect statistics, and precisely the statistics the tester gives, not mythical R-squared values.
I'm writing this for a reason.
The classification error statistics on the training/testing/validation files, which were obtained by splitting the prepared file in advance, are excellent. But the statistics obtained by moving forward, bar by bar, OUTSIDE that file have nothing in common with those training/testing/validation statistics.
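A minimal sketch of that comparison: an in-file random split versus stepping bar by bar beyond the prepared file (all file and column names are placeholders):

import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

df = pd.read_csv("prepared_file.csv")        # the file split into train/test/validation
future = pd.read_csv("bars_after_file.csv")  # bars that come after that file

X, y = df[["f1", "f2", "f3"]], (df["ret_next"] > 0).astype(int)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, shuffle=True)

model = RandomForestClassifier(n_estimators=200).fit(X_tr, y_tr)
print("in-file accuracy:", accuracy_score(y_te, model.predict(X_te)))

# Now move OUTSIDE the prepared file, one bar at a time, and collect the statistics there.
Xf, yf = future[["f1", "f2", "f3"]], (future["ret_next"] > 0).astype(int)
preds = [model.predict(Xf.iloc[[i]])[0] for i in range(len(Xf))]
print("walk-forward accuracy:", accuracy_score(yf, preds))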