Neural network folks, don't pass by :) need advice - page 4

 
hrenfx:

What was the sequence of steps used to obtain the chart at the start of the thread?

The Expert Advisor builds a file of patterns. Patterns from the last 2 years are taken for training.

Retraining is done once a month, and so on over the whole history.

Figar0:

To try to give any advice on how and what to improve, one needs to understand how the whole thing works in general, right?

Ask away - I just won't answer uncomfortable questions.

For example, how do you preprocess the inputs, and how do you select the training result?

An HP filter. And I don't select it at all :) Could you be more specific here - what do you mean by result selection?
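Since the HP filter is the only preprocessing step named, here is a minimal NumPy sketch of Hodrick-Prescott detrending for readers who want to reproduce the idea. The lambda value and the notion of feeding the stationary cycle component to the network are illustrative assumptions, not settings confirmed by the author.

```python
import numpy as np

def hp_filter(y, lamb=1600.0):
    """Hodrick-Prescott filter: split series y into trend + cycle.

    The trend solves (I + lamb * D'D) trend = y, where D is the
    second-difference operator; cycle = y - trend.
    """
    y = np.asarray(y, dtype=float)
    n = len(y)
    D = np.zeros((n - 2, n))                 # second-difference matrix
    for i in range(n - 2):
        D[i, i:i + 3] = (1.0, -2.0, 1.0)
    trend = np.linalg.solve(np.eye(n) + lamb * D.T @ D, y)
    return trend, y - trend

# Toy usage: detrend a fake price series; the stationary 'cycle' part is
# what one would plausibly feed to the network (my assumption).
closes = 100.0 + np.cumsum(np.random.randn(500))
trend, cycle = hp_filter(closes, lamb=1600.0)   # lamb is a guess, not the EA's
```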

What does this ESN of yours look like, explained simply...

A convenient black box. Just stuff it with interconnected neurons, train it, and enjoy life. By the way, there are no feedback connections.
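For readers who haven't met this "black box": a toy echo state network in NumPy, matching the description above - a fixed random reservoir, only the linear readout is trained (here by ridge regression), and no output-to-reservoir feedback (my reading of "no feedback connections"). All sizes and constants are illustrative guesses, not the author's settings.

```python
import numpy as np

rng = np.random.default_rng(0)

class ESN:
    """Toy echo state network: random fixed reservoir, trained readout only."""

    def __init__(self, n_in, n_res=200, spectral_radius=0.9, ridge=1e-6):
        self.W_in = rng.uniform(-0.5, 0.5, (n_res, n_in))
        W = rng.uniform(-0.5, 0.5, (n_res, n_res))
        # Scale recurrent weights so the echo state property roughly holds
        W *= spectral_radius / max(abs(np.linalg.eigvals(W)))
        self.W, self.ridge, self.n_res = W, ridge, n_res

    def _states(self, U):
        x, X = np.zeros(self.n_res), []
        for u in U:                     # tanh reservoir update, no output feedback
            x = np.tanh(self.W_in @ u + self.W @ x)
            X.append(x.copy())
        return np.array(X)

    def fit(self, U, y):
        X = self._states(U)
        # Ridge-regression readout: the only trained part of the network
        A = X.T @ X + self.ridge * np.eye(self.n_res)
        self.w_out = np.linalg.solve(A, X.T @ y)

    def predict(self, U):
        return self._states(U) @ self.w_out

# Toy usage: one-step-ahead prediction of a noisy sine
t = np.linspace(0, 60, 2000)
sig = np.sin(t) + 0.05 * rng.normal(size=t.size)
esn = ESN(n_in=1)
esn.fit(sig[:1500, None], sig[1:1501])
pred = esn.predict(sig[1500:-1, None])  # state restarts from zero: fine for a toy
```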

And I didn't "warn" you about the majors for nothing - they're rather cryptic...

There is another hypothesis; we'll see from the tests - you may be right. In any case, so far all the results are better than random entries.
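"Better than random entries" can be made precise with a Monte Carlo baseline. A hypothetical sketch - the returns, trade count and holding time are all made up for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)

def random_entry_pnl(bar_returns, n_trades, hold_bars):
    """Total P&L of n_trades random-direction entries held hold_bars each."""
    starts = rng.integers(0, len(bar_returns) - hold_bars, n_trades)
    sides = rng.choice([-1.0, 1.0], n_trades)
    return sum(s * bar_returns[i:i + hold_bars].sum()
               for s, i in zip(sides, starts))

bar_returns = rng.normal(0.0, 1e-4, 50_000)      # placeholder M15 returns
baseline = [random_entry_pnl(bar_returns, 100, 32) for _ in range(1000)]
system_pnl = 0.05                                 # hypothetical TS result
beat = np.mean([b < system_pnl for b in baseline])
print(f"system beats {beat:.1%} of random-entry runs")
```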
 
hrenfx:

Are we going to have a pissing match here? Do you, in particular, want to discuss what interests you in the topic you raised? If not, I'll pass. OK, I'll say it once more:

Training is done - then an OOS test follows. Then training again, followed by OOS again.

The aggregate balance graph is the glued-together OOS sections. That is, the entire balance graph is the work of the TS on data unknown to it. The TS runs at a profit on OOS, on data it has never seen. The neural network works on data that is unknown to it: it sees the data for the first time in its life and still works in the plus. I don't know how to explain it any better.
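In code, the procedure reads roughly as below: retrain on a trailing window, trade the next unseen month, glue the OOS pieces together. train_network and run_oos are toy stand-ins for whatever the EA's DLL really does; the 24-month window and 1-month step come from the thread (2 years of patterns, monthly retraining).

```python
import numpy as np

rng = np.random.default_rng(2)

TRAIN_MONTHS, STEP, TOTAL = 24, 1, 60
months = [rng.normal(0.0, 1e-3, 500) for _ in range(TOTAL)]   # fake month data

def train_network(history):              # stand-in for the real training
    return np.concatenate(history).mean()

def run_oos(net, future):                # stand-in for the real OOS backtest
    return [np.sign(net) * m.sum() for m in future]

glued = []
for t in range(TRAIN_MONTHS, TOTAL, STEP):
    net = train_network(months[t - TRAIN_MONTHS:t])    # known history only
    glued += run_oos(net, months[t:t + STEP])          # month the net never saw

balance_curve = np.cumsum(glued)   # the whole curve is made of OOS segments
```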

Damn, MAs are not capable of that. Show me the same "trick" with MAs (in quotes, because it's not a trick, man), and I'll order you a monument in my lifetime (at the very least I can make the 3D model myself).


PS. I reread my post and realise it came across too harsh - sorry, I didn't mean it that way.

 

By the way, you can ask joo about pre-processing. If he wants to, he'll tell you.

Basically, it's nothing supernatural.

Andrei, come on in.

 
TheXpert:

By the way, you can ask joo about pre-processing. If he wants to, he'll tell you.

Basically, it's nothing supernatural.

Andrei, come on in.

Hello, my namesake.

I'm glad the results are at least better than from random inputs.

I think that since you started this thread, something is eating at you. Something is missing for emotional balance, so to speak. Perhaps because the results are too discouraging: unexpected, even shocking in a way.

Yes, there is nothing supernatural in the preprocessing, in how the data is represented for the neural network; everything is simple and probably logical. I said and explained all of this earlier, in the concepts of Flowing Patterns and the second type of TS. I'd rather not go into more detail on the subject; those who want to will find all the information on this forum.

So it seems to me that, having achieved a positive expected value (MO), we can safely give ourselves over to the bliss of searching for the best MM for the TS (a toy expectancy/MM sketch follows after this list). But we need to remember:

firstly, the current results (positive, by the way) were obtained with default pattern settings chosen off the top of my head, and perhaps by changing them you can get even better results;

secondly, the way of trading on the predicted "tail" is not perfect and does not fully fit the concept of the second type of TS;

thirdly, I think that, on the contrary, we should move away from the majors and switch to pairs with a distinct character - pairs whose pattern is visible to the naked eye, at least to my eye, such as GBPJPY and a few others. It may well be that the results will improve further thanks to more distinct recognition of those pairs' characteristic patterns, whereas the majors' behaviour looks more like a random walk.
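On the MM point above: with trade-by-trade results in hand, the expected value (MO) and a rough position-sizing fraction can be estimated directly. A toy sketch - the trade numbers and the Kelly-style sizing are my illustration, not joo's method:

```python
import numpy as np

def expectancy(trade_returns):
    """Per-trade expected value - the 'MO' of the thread."""
    return float(np.mean(trade_returns))

def kelly_fraction(trade_returns):
    """Rough Kelly estimate f* ~= mean/variance; meaningful only if MO > 0."""
    r = np.asarray(trade_returns, dtype=float)
    return float(r.mean() / r.var())

trades = np.array([0.012, -0.008, 0.020, -0.010, 0.015])  # made-up results
print(expectancy(trades), kelly_fraction(trades))          # MO first, then f*
```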

Wall of text. Sorry.

 
joo:

Training is done - then an OOS test follows. Then training again, followed by OOS again.

Yes, that was clear from the start - it's forwards glued together. This is an EA with auto-optimisation - read: with regular retraining (the same thing).

The question was different: how did this come about? A NN was written here. Why did the author choose a sliding optimisation window of 25 months and a forward window of one month? Why not other values? And if other values work too, how is that different from simply having found window sizes at which the system doesn't lose?

Damn, MAs are not capable of that. Show me the same "trick" with MAs (in quotes, because it's not a trick, man), and I'll order you a monument in my lifetime (at the very least I can make the 3D model myself).

"MAs" here means any indicators at all. For example, the same trained NN can be thought of as an indicator with a huge number of input parameters. The window is shifted, the parameters are reoptimised (not necessarily by maximum profit - by other characteristics too), then we look further, and so on.

The approach itself is the same. It's just that an MA is quite primitive, while a NN is not quite so primitive. But again, the indicator concept is the same in both cases.

Somehow indicators and NNs get created, but it's all just mathematics, from primitive to more complex. Still, mathematics is mathematics, but some conceptual model of the market has to be laid down first. Otherwise everyone (myself included) fiddles with various complex mathematical contortions of financial time series, and if something works, we can't explain to ourselves why. As in: we've found a pattern. We reason not like engineers but like humanities people: if it works, there must be a pattern. We are unable to grasp the causes. Our own systems are black boxes even to ourselves.

 
hrenfx:

The question was different: how did this come about? A NN was written here. Why did the author choose a sliding optimisation window of 25 months and a forward window of one month?

A month is convenient; 25 is logical. I tested 10, 15, 20, ..., 40 - much the same everywhere. Time to move on from criticism to advice; my patience is not ironclad.
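The scan TheXpert describes amounts to rerunning the whole walk-forward for each candidate window length and checking that the glued OOS result survives everywhere, rather than at one lucky size. A toy sketch with the same stand-in logic as the earlier walk-forward example:

```python
import numpy as np

rng = np.random.default_rng(3)
months = [rng.normal(0.0, 1e-3, 500) for _ in range(60)]    # fake month data

def walk_forward(months, train_months, step=1):
    """Glued OOS P&L for one training-window length (toy stand-in logic)."""
    out = []
    for t in range(train_months, len(months), step):
        signal = np.sign(np.concatenate(months[t - train_months:t]).mean())
        out += [signal * m.sum() for m in months[t:t + step]]
    return out

# Robustness check: does the result hold across window lengths,
# or was one profitable window size simply stumbled upon?
for w in (10, 15, 20, 25, 30, 35, 40):
    print(w, round(sum(walk_forward(months, w)), 6))
```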

The approach itself is the same.

Yeah, except that for some reason the inertia on the forward isn't there.
 

The market concept of flowing patterns makes no sense to me at all. I just can't explain to myself why market participants, in aggregate, should trade in patterns whose lifetime is far from short. Explaining the market through crowd psychology seems strange, because there is also the question of profit maximisation, among other things. All in all, it's clear that nothing is clear.

 
hrenfx:
.....

Somehow indicators and NNs get created, but it's all just mathematics, from primitive to more complex. Still, mathematics is mathematics, but some conceptual model of the market has to be laid down first. Otherwise everyone (myself included) fiddles with various complex mathematical contortions of financial time series, and if something works, we can't explain to ourselves why. As in: we've found a pattern. We reason not like engineers but like humanities people: if it works, there must be a pattern. We are unable to grasp the causes. Our own systems are black boxes even to ourselves.

The beauty of the situation is that a general theory describing the market was formulated first. It doesn't matter whether it was crude or complete. What matters is that first there was the theory, and then there was practice confirming the theory. That is the disconcerting reality: it all came together.
 
TheXpert:

What do you mean by result selection?


Judging by the speed of preparing tests over such a long period with so much retraining, it's all automated inside the DLL itself. How many parameters/weights are trained inside the network, and what is the criterion for stopping the training (a number of epochs, or reaching an acceptable error on the test sample)? Does increasing the training time change anything? The training period, in my opinion, is too long for M15 - a year is enough for me on H1. Have you tried making it shorter?
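For reference, the two stopping criteria Figar0 lists (an epoch budget, an acceptable error on the test sample) plus the usual patience rule, shown on a deliberately trivial gradient-descent model. This is a generic illustration, not the EA's actual procedure - an ESN readout is normally fit in a single ridge-regression step anyway:

```python
import numpy as np

rng = np.random.default_rng(4)

# Trivial stand-in model: linear regression on synthetic data
X = rng.normal(size=(300, 5)); w_true = rng.normal(size=5)
y = X @ w_true + 0.1 * rng.normal(size=300)
Xtr, ytr, Xva, yva = X[:200], y[:200], X[200:], y[200:]

w = np.zeros(5)
best_err, best_w, stale = float("inf"), w.copy(), 0
MAX_EPOCHS, TARGET_ERR, PATIENCE, LR = 1000, 1e-3, 20, 0.01

for epoch in range(MAX_EPOCHS):                    # criterion 1: epoch budget
    w -= LR * Xtr.T @ (Xtr @ w - ytr) / len(ytr)   # one training epoch
    err = np.mean((Xva @ w - yva) ** 2)            # error on the test sample
    if err < best_err:
        best_err, best_w, stale = err, w.copy(), 0
    else:
        stale += 1
    if err <= TARGET_ERR:                          # criterion 2: target error
        break
    if stale >= PATIENCE:                          # extra: stop on no progress
        break

w = best_w   # keep the weights with the best validation error
```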

joo:

I'm glad the results are at least better than from random inputs.

An interesting phrase. Why "random" inputs? Can you explain in a nutshell?

 
TheXpert:

Time to move on from criticism to advice; my patience is not ironclad.


I've been struggling with this problem for a couple of years already :) There are some improvements, but only pennies and crumbs - and that's given that I know my network inside out. The only qualitative leap came when I figured out how to improve the training system. So that is the direction I'd advise you to think in.

Otherwise: tweak the inputs here and there (the great secret of neural networkers) - pennies; tweak the architecture - crumbs...

PS. Could you post a full OOS test, for example for just this last March? I'll try to see how it compares with mine.

PS2.

TheXpert:

Echo State Network :) It doesn't matter, though. I'm pretty sure I could get similar results with, say, FANN, only with more work.


So, by your own account, it's not about the type of NN. Then what is it about? I agree in principle, but what the secret of a capable NN is - even though I more or less have one - I can't quite formulate...
