Machine learning in trading: theory, models, practice and algo-trading - page 1123

 
Mihail Marchukajtes:

Well, let's talk.....

Can you reduce your AI to a single neuron and run it on my data? Respect to Sanych, of course, but I could do the statistical analysis myself in Rattle, and I could play with nets there too... Not interesting :-(

I haven't received an answer so far, and frankly, I'm no longer expecting one.......

And who or what is stopping you from doing as you see fit? Training a single neuron and calling it AI is funny, of course, but if it works, why not? By the way, I'm no great fan of complex solutions either, but I doubt that fiddling with one or two neurons will be all that interesting. That's probably why I don't see any enthusiasts willing to test anything.

By the way, Maxim already played with Reshetov's neuron about a year and a half ago, and it was churning out "grails" one after another. But he abandoned it long ago. Probably for good reasons).

 
Maxim Dmitrievsky:

One neuron can't pull it off. I tried many neurons, but the optimizer starts to choke on the number of weights, and the cloud eats a lot of resources.

After that I built a neuron out of fuzzy logic (a fuzzy output), which has only two weights, or even one. A combination of these fuzzy neurons optimizes many times faster, but it takes a lot of code, and the optimizer is not flexible.

So we went a different way.
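
The thread doesn't show how such a fuzzy neuron is built, but as a purely hypothetical sketch: a Gaussian membership function has exactly two tunable parameters (centre and width), which would match the "two weights, or even one" remark if the width is fixed. Everything below, including the example inputs, is an assumption for illustration only:

```python
import numpy as np

def fuzzy_neuron(x, c, s):
    # Degree of membership of input x in a fuzzy set centred at c
    # with width s -- exactly two tunable "weights" (one, if s is fixed).
    return np.exp(-((x - c) / s) ** 2)

# A "rule" combines two such neurons with a fuzzy AND (here: minimum),
# e.g. "momentum is high AND spread is low" -> signal strength in [0, 1].
signal = min(fuzzy_neuron(0.8, c=1.0, s=0.5),   # hypothetical momentum reading
             fuzzy_neuron(0.2, c=0.0, s=0.5))   # hypothetical spread reading
print(signal)
```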

It's clear that one neuron doesn't and can't pull it off.

Well, those are all NNs at heart one way or another, fuzzy logic included.

My NNs take a long time to train, about a day in total. I think that's normal. They run fast, even on slow software (or rather, a slow shell), so there are no performance problems.

What is the other way?

 
no:

Unfortunately I am no master of eloquence or formal proofs; in fact I'll think about how to justify it formally, if only to clarify it for myself. I don't even remember exactly when or from whom I learned this, or whether it was simply self-evident. But at the level of "common sense": the future (the forward) is not available to us and must not appear at the training stage in any form. We fit the algorithm on the past, so the test set in ML, and the forward period in optimization generally, must not see the future. Talk about the "independence" of samples obtained with window filters is naive. Think about it: each "independent" point has, say, several momenta with different windows as features and a momentum shifted into the future as a target, and the next point already contains that target inside its features)))) Nah... it's just another way of peeking.

no, not convinced ))

I can still accept that if the momentum window is too big it might somehow bleed backwards into the forward, although... no.
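
For what it's worth, the peeking effect described above is easy to reproduce on synthetic data. A minimal sketch (all window sizes, the horizon, and the model are arbitrary illustrative choices, not anything from the thread): momentum features over several windows plus a future-shifted target, computed on a pure random walk. A shuffled split looks predictive only because neighbouring samples share the target; a walk-forward split does not.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score, KFold, TimeSeriesSplit

rng = np.random.default_rng(0)
price = np.cumsum(rng.normal(size=5000))      # pure random walk: nothing to predict

h = 10                                        # target horizon (future momentum)
wins = [5, 10, 20]                            # feature windows (past momenta)
t = np.arange(max(wins), len(price) - h)

X = np.column_stack([price[t] - price[t - w] for w in wins])  # past momenta
y = (price[t + h] > price[t]).astype(int)                     # future momentum sign

model = RandomForestClassifier(n_estimators=100, random_state=0)

# Shuffled K-fold: train and test points interleave in time, so a test
# point's future-return target overlaps its training-set neighbours'
# features -- accuracy typically comes out well above 0.5 on pure noise.
print(cross_val_score(model, X, y, cv=KFold(5, shuffle=True, random_state=0)).mean())

# Walk-forward split: the test always lies after the training data --
# accuracy falls back to roughly 0.5, as it should on a random walk.
print(cross_val_score(model, X, y, cv=TimeSeriesSplit(5)).mean())
```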

 
Maxim Dmitrievsky:

Peeking should still show up only on the training set as a small error, not on the test set.

That doesn't apply to the forest: with a forest you can get a negligible error on any training set even without peeking. It's as if it were designed for maximum overfitting :)

By peeking into the future you are only cheating yourselves..... Everyone who tries to outwit the market, the martingale crowd and the like, cheats themselves first of all; the main thing is to realize it as soon as possible. It took me about half a year to settle this question for myself. You would have to be quite an idiot to know you are cheating yourself and keep doing it anyway. I mean the martingale grid traders, or whatever they call themselves :-)
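
As for the forest remark above, that part is easy to verify: an unconstrained random forest will memorise even random labels, giving near-zero training error and chance-level test error. A minimal sketch with arbitrary sizes:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(1)
X = rng.normal(size=(2000, 10))           # pure noise features
y = rng.integers(0, 2, size=2000)         # random labels: nothing to learn

clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X[:1000], y[:1000])
print(clf.score(X[:1000], y[:1000]))      # ~1.0: fully grown trees memorise the sample
print(clf.score(X[1000:], y[1000:]))      # ~0.5: chance level out of sample
```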

 

What's wrong?

 
Igor Makanu:

fxsaber's topic is really a great tool. I'm sitting here wondering how much we actually need NNs/forests etc. if a good model can be described by simpler means; fxsaber has shown it once again ;)

Actually, both approaches, with and without ML, are on an equal footing. You still need some idea for the ML, just as for an ordinary strategy.

 
Yuriy Asaulenko:

Actually, both approaches, with and without ML, are on an equal footing. You still need some idea for the ML, just as for an ordinary strategy.

I was thinking about it today... I'm tired of reading... But I convinced myself: if there is a TS built on MAs, then the MAs can be reproduced with an NN (that part is elementary....), which means it's still worth finishing the theoretical study of NNs.... in order to build MAs.

))))

 
Igor Makanu:

I was thinking about it today... I'm tired of reading... But I convinced myself: if there is a TS built on MAs, then the MAs can be reproduced with an NN (that part is elementary....), which means it's still worth finishing the theoretical study of NNs.... in order to build MAs.

))))

That's roughly where I started too. I was solving primitive problems with NNs, like recognizing MA crossovers and other market stuff that is absolutely useless in practice. But it did show me which end to tackle the problem from).
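
The "it's elementary" step above is literal, for what it's worth: a single linear neuron whose N weights are all fixed at 1/N computes exactly the simple moving average, so an NN can represent an MA with no training at all. A minimal illustrative sketch (N and the synthetic prices are arbitrary):

```python
import numpy as np

N = 10
prices = 100 + np.cumsum(np.random.default_rng(2).normal(size=200))

# A single linear neuron with all weights fixed at 1/N over the last N
# prices computes exactly the simple moving average SMA(N).
w = np.full(N, 1.0 / N)
print(prices[-N:] @ w)        # neuron output
print(prices[-N:].mean())     # SMA(N): identical
```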

 
toxic:

Showing a backtest is exhibitionism; in an algotrading environment it ought to be considered shameful.

That was for your enlightenment: calling profitable trading random is like confusing bulls and bears with exhibitionists.
 
Maxim Dmitrievsky:

yes, the topic can still be developed... there is still room for more )


Exactly! IMHO, this is the first genuinely new topic in many years... fxsaber, so to speak, went and tied space and time together

;)
