Market etiquette or good manners in a minefield - page 89

 
paralocus wrote >>
I think that the training sample length cannot be just a function of the network configuration (number of inputs and number of neurons); perhaps some characteristic of the series we want to train the network on should also be taken into account.

This is possible if the BP is non-stationary, and the correction will then be determined by the nature of the non-stationarity. However, we are still far from such a correction - we do not yet know the form of the dependence of the sample length on the number of weights and on the dimensionality of the NS input. Intuitively, I expect the general form to be P = 4*w, and for this to hold for any NS architecture.
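The heuristic P = 4*w can be made concrete with a small sketch. This is illustrative only: the function name and the example architecture are my own assumptions, and the factor of 4 is the intuition stated above, not an established result.

```python
# Counting the number of adjustable weights w of a fully connected
# feedforward NS and applying the (unproven) heuristic P = 4*w for
# the training-sample length.

def count_weights(layer_sizes, bias=True):
    """Number of weights in a fully connected feedforward net."""
    w = 0
    for n_in, n_out in zip(layer_sizes[:-1], layer_sizes[1:]):
        w += (n_in + (1 if bias else 0)) * n_out
    return w

# Example (made-up architecture): 12 binary inputs,
# one hidden layer of 5 neurons, 1 output neuron.
w = count_weights([12, 5, 1])   # (12+1)*5 + (5+1)*1 = 71
P = 4 * w                       # suggested training-sample length
print(w, P)
```

The point of the sketch is only that P scales linearly with the total weight count, whatever the architecture.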

Fedor, what do you say about my "epiphany" on the problem of binary BP prediction? The fact that for this special case of BP we can use statistical analysis as a forecasting engine just as effectively as an NS frees our hands considerably in terms of the resource-intensiveness of the process. I believe this is a breakthrough. Where "uncomprehending" traders spend time and effort training five-layer NS with adaptive architectures, all we need is to gather statistics - and not to squirm (as Lenin put it).

 
Neutron >> :

...Intuitively, I think that the general view will look like this: P=4*w, and this is true for any architecture of NS.

...Fedor, what do you say about my "epiphany" on the problem of binary BP prediction? ...I believe this is a breakthrough...

I tell you, Sergey, it looks more than tempting. I have stepped back from the networks themselves for now, because I am busy with ticks, and so far I have avoided full-fledged Expert Advisors on grids with binary inputs. But now I have some free time, so I intend to return to the experiments with grids, including binary inputs. If we can do without grids at all... it really is a breakthrough. I remember Prival, and not only Prival, suggested the idea of a pattern database. If a statistical model allows creating such a database, the Expert Advisor would be childishly simple but maturely effective. By the way, maybe we should not move away from grids at all; the grid should just change its task - instead of forecasting the next reading, it could do pattern recognition - imho a much more promising task, and one that fits fuzzy logic systems very well. What do you think?

 
Neutron wrote >>

...to use statanalysis as a forecasting machine, which would free up our hands considerably in terms of the resource-intensiveness of the process. I think it's a breakthrough...

Why not use:

- Association rules, or

- Kohonen maps?

They give exactly the probability and the support.

 

That's the spirit! Haven't been here in a while - it's 90 pages already.) If you guys actually manage to get the grids working too, it will be unbelievable.)

 
M1kha1l wrote >>

Why not use:

- Association rules, or

- Kohonen maps?

They give exactly the probability and the support.

Why?!

After all, in the case of binary input data this problem can just as well be solved by statistical analysis of patterns. Judge for yourself: no hassle with training and no search for the optimal architecture. As paralocus rightly put it, "the Expert Advisor would be childishly simple but maturely effective"!

 

Having embarked on experiments with binary inputs, I have one question. I asked it once already, but I'll repeat it:

If I feed the inputs with the signs of the first-difference series of the BP, then I should predict exactly the sign of the next increment...

Below is my single-layer code (I started with it for now), and this code takes the amplitude, not the sign, as the error at the grid output, although it is the sign that is used in the OUT calculation itself. Is that correct? Shouldn't the error also be a sign, or at worst an amplitude sufficient to produce the appropriate sign at the output?


 
paralocus wrote >>

I tell you, Sergey, it looks more than tempting. ...If a statistical model allows creating such a database, the Expert Advisor would be childishly simple but maturely effective. ...instead of forecasting the next reading, it could do pattern recognition... What do you think?

I am amazed :) You seem like smart guys, but you stop halfway. The input of the Expert Advisor in this scheme must be that very database of pattern statistics. Well, how many patterns do you plan to put there?

All of them? Good luck... :) And if not all - how many exactly? And how is this task any easier than the NS?

The Expert Advisor will certainly be fast in the end. Only you will have to feed it "truths" from the database which, by the way, turn out not to be true when tested. No fuzzy logic will help you extract the jam of the future from the trash of the past (the DB).

// Actually, to be honest - I checked it. About a year and a half ago.

// Only my scheme was more elegant. After some careful and creative thinking I reasoned as follows: if at a given moment I

// need to make a decision on the pattern I have right now, in the present tense, why do I need

// a gigabase with a bunch of irrelevant patterns??? I take the current pattern hot off the stove and run it back through history,

// gathering statistics as I go. Once collected, I use them immediately. The dimensionality of the problem shrinks by x^n

// times, where n = the number of patterns in the database. Huh.

// I did it. Got results. On the whole

// negative, though along the way I discovered a certain meta-law. I won't tell you which one, sorry. Because...

// it is a far from obvious regularity; it has to be seen. So find it yourself. Good luck. (without irony)

// To summarize once again: you will not get the expected result from this scheme. But you may get a kind of "satori".

// It may give you some clues to understanding the nature of the market as a Learning Meta-System. Which is GOOD.
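A minimal sketch of the scheme described in the comments above: take the pattern that has just formed, run it back through history, and gather the statistics on the spot. The function names, the pattern length m, and the toy price series are all illustrative assumptions, not anyone's actual code.

```python
# "Current pattern only" statistics: no pattern database is stored;
# the freshest pattern is matched against history on the fly.

def sign_series(prices):
    """Binary (+1/-1) series of signs of the first differences."""
    return [1 if b - a > 0 else -1 for a, b in zip(prices[:-1], prices[1:])]

def pattern_stats(signs, m):
    """P(next sign = +1) given that the last m signs match the current pattern."""
    pattern = signs[-m:]                  # the pattern "hot off the stove"
    ups = total = 0
    for i in range(len(signs) - m):       # run it back through history
        if signs[i:i + m] == pattern:
            total += 1
            ups += signs[i + m] == 1
    return ups / total if total else None # None: pattern never seen before

# Toy example with made-up quotes and pattern length m=2.
prices = [1.30, 1.31, 1.29, 1.30, 1.32, 1.31, 1.30, 1.31, 1.33]
print(pattern_stats(sign_series(prices), m=2))
```

With 2^m possible binary patterns this replaces a stored gigabase by one linear scan per decision, which is exactly the dimensionality reduction the comment claims.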

 
MetaDriver >> :

>> that's hilarious.)


Maybe you were too hasty? He who laughs last, as the cowboys say...

I don't think the DB will be big here. A hundred and fifty patterns at most, and most likely even fewer. You must have been running your system on candles...

 
paralocus wrote >>

Having embarked on experiments with binary inputs, I have one question... this code takes the amplitude, not the sign, as the error at the grid output, although it is the sign that is used in the OUT calculation itself. Is that correct? Shouldn't the error also be a sign?

Here is the thing: the main reason for moving to a binary BP is the possible rejection of the normalization and whitening procedures for the NS inputs and, most importantly, the transition from a continuous analyzed quantity (an infinite number of values) to a binary one taking only two values, +/-1. This significantly saves computational resources. The network itself is trained by the ORO (error backpropagation) method, and for that it generates an error defined on the real numbers (not discrete). So, paralocus, feeding +/-1 to the input, you will get an output value in the range from -1 to 1 with a step of 10^-8. Only once the network has finished training should you use the sign of the output as the predicted direction; its amplitude will be proportional to the probability of a correct prediction (the amplitude is always positive). This probability can be used for additional analysis in the MM block.
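A minimal sketch of this training scheme, assuming a single-layer grid with a tanh output. All names, sizes, the toy task, and the learning rate are my own illustrative assumptions; ORO here is plain error backpropagation on the real-valued amplitude error, with the sign used as the forecast only after training.

```python
# Single-layer grid on binary +/-1 inputs: the training error is the
# real-valued amplitude (target - out), while the trained net's sign
# is the forecast and |out| serves as a confidence proxy for the MM block.
import math
import random

def train_binary_perceptron(samples, n_inputs, epochs=200, lr=0.1):
    """samples: list of (inputs in {+1,-1}^n, target in {+1,-1})."""
    random.seed(0)
    w = [random.uniform(-0.5, 0.5) for _ in range(n_inputs + 1)]  # w[0] = bias
    for _ in range(epochs):
        for x, target in samples:
            s = w[0] + sum(wi * xi for wi, xi in zip(w[1:], x))
            out = math.tanh(s)
            err = target - out                 # amplitude error, not a sign
            grad = err * (1.0 - out * out)     # tanh derivative
            w[0] += lr * grad
            for i, xi in enumerate(x):
                w[i + 1] += lr * grad * xi
    return w

def predict(w, x):
    out = math.tanh(w[0] + sum(wi * xi for wi, xi in zip(w[1:], x)))
    return (1 if out >= 0 else -1), abs(out)   # sign = forecast, |out| = confidence

# Toy task (assumed): the target is simply the sign of the first input.
samples = [((1, 1), 1), ((1, -1), 1), ((-1, 1), -1), ((-1, -1), -1)]
w = train_binary_perceptron(samples, n_inputs=2)
sign, conf = predict(w, (1, -1))
print(sign, round(conf, 3))
```

Note that during training the error stays continuous, exactly as described above; discretizing it to a sign would destroy the gradient that backpropagation needs.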

The whole advantage of an NS over other methods of BP analysis is the possibility of a non-analytical (non-explicit) construction of a multidimensional surface in the space of "very scarce" input data, toward which the values of the initial BP (the first difference of the kotir) are "attracted". In the binary representation we deal with a hypersurface degenerated into a multidimensional hypercube. It is like a crystal that has no surface but has the nodes of its crystal lattice (figuratively speaking): we know the coordinates of these nodes precisely, and we do not need to spend resources on detecting the surface... The network becomes unnecessary.
