Market etiquette or good manners in a minefield - page 32

 
paralocus wrote >>

You can, but it is not convenient: each subsequent call to Comment() overwrites the output of the previous one, since it is drawn at the same chart coordinates. That's why Print() is better;
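A minimal illustration of the difference (drop it into a script and compare the chart corner with the Experts log):

// Comment() redraws the same text field in the chart's upper-left
// corner, so each call replaces the previous output entirely;
// Print() appends a new line to the Experts log on every call.
int start()
{
   Comment( "first" );    // shown in the chart corner
   Comment( "second" );   // replaces "first" - only "second" remains
   Print( "first" );      // logged as its own line
   Print( "second" );     // logged as another line - both survive
   return( 0 );
}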

paralocus, install Mathcad and don't struggle with the interface. Debug the grid there and then implement it in your MQL.

You'll save a lot of time, even with adaptation to the new environment.

 

Bought it just yesterday. I'm installing it now - good old MQL has really worn me out!

Meanwhile, judging by the weights dump, the grid learns in 3 epochs and seems to run without glitches:

------------------------------------------------------------------------------------------------------------------------

WEIGHTS DUMP

------------------------------------------------------------------------------------------------------------------------

2009.05.22 18:17:48 Nero AUDUSD,M30: W : -0.1456 | 7.3647 | 1.1477 | 0.1959 | 0.197 | -0.1281 | -0.8441 | -0.4209 | -0.1956 | -0.044 | 0.0458 | 0.7074 | -0.1706
2009.05.22 18:17:48 Nero AUDUSD,M30: W : -0.1456 | 7.3647 | 1.1477 | 0.1959 | 0.197 | -0.1281 | -0.8441 | -0.4209 | -0.1956 | -0.044 | 0.0458 | 0.7074 | -0.1706
2009.05.22 18:17:48 Nero AUDUSD,M30: W : -0.1456 | 7.3647 | 1.1477 | 0.1959 | 0.197 | -0.1281 | -0.8441 | -0.4209 | -0.1956 | -0.044 | 0.0458 | 0.7074 | -0.1706
2009.05.22 18:17:48 Nero AUDUSD,M30: W : -0.1456 | 7.3647 | 1.1477 | 0.1959 | 0.197 | -0.1281 | -0.8441 | -0.4209 | -0.1956 | -0.044 | 0.0458 | 0.7074 | -0.1706
2009.05.22 18:17:48 Nero AUDUSD,M30: W : -0.1456 | 7.3647 | 1.1477 | 0.1959 | 0.197 | -0.1281 | -0.8441 | -0.4209 | -0.1956 | -0.044 | 0.0458 | 0.7074 | -0.1706
2009.05.22 18:17:48 Nero AUDUSD,M30: W : -0.1456 | 7.3647 | 1.1477 | 0.1959 | 0.197 | -0.1281 | -0.8441 | -0.4209 | -0.1956 | -0.044 | 0.0458 | 0.7074 | -0.1706
2009.05.22 18:17:48 Nero AUDUSD,M30: W : -0.1456 | 7.3647 | 1.1477 | 0.1959 | 0.197 | -0.1281 | -0.8441 | -0.4209 | -0.1956 | -0.044 | 0.0458 | 0.7074 | -0.1706
2009.05.22 18:17:48 Nero AUDUSD,M30: W : -0.1456 | 7.3647 | 1.1477 | 0.1959 | 0.197 | -0.1281 | -0.8441 | -0.4209 | -0.1956 | -0.044 | 0.0458 | 0.7074 | -0.1706
2009.05.22 18:17:48 Nero AUDUSD,M30: W : -0.1 | 6.0564 | 1.1419 | 0.1999 | 0.2118 | -0.11 | -0.821 | -0.4656 | -0.1458 | -0.037 | 0.0564 | 0.7267 | -0.1584
2009.05.22 18:17:48 Nero AUDUSD,M30: W : 0.1164 | 3.5091 | 1.2495 | 0.3147 | 0.3362 | 0.0168 | -0.6902 | -0.8039 | -0.0126 | 0.0312 | 0.1206 | 0.8339 | -0.0615
 

Done! Mathcad is installed. I'll be getting acquainted with it this weekend.

Neutron, "...and yet it spins!" The single layer works, learn. It's just that I tried to bring it up on M30, and there - you know what "patterns" are.

But on H4 - look what it does. And, interestingly enough, pay attention to the number of inputs and the number of epochs:

Here D = 7, 24 epochs:

[chart image]

And here D = 5, also 24 epochs:

[chart image]
 
Neutron >> :

No, Petya yanked out all the bushes in the yard - he'd been asked to extract a square root :-)

And Vasily Ivanovich was sharpening his sabre - he'd been asked to divide a monomial by a polynomial...

 
paralocus wrote >>

Done! Mathcad is installed. I'll be getting acquainted with it this weekend.

Neutron, "...and yet it moves!" The single layer works and learns. I just tried to raise it on M30, and you know what kind of "patterns" there are.

But on H4 - look what it does. And, interestingly enough, pay attention to the number of inputs and the number of epochs:

Here D = 7, 24 epochs:

And here D = 5, also 24 epochs:

Congratulations on making it work!

Actually, you shouldn't fret over M30... If the idea works, it will work on M5 as well. Another question is whether the total result on M5 would be higher than on H4. I recently compared H1 and M5 (see above in the thread) - on M5, over the same 10,000 bars, I gathered more than half of what I got on H1. And the time factor, as you know, is 12...

I have never managed to get a reasonable result on M1.

 
YDzh >> :

I recently compared H1 and M5 (see above in the thread) - on M5, over the same 10,000 bars, I gathered more than half of what I got on H1. And the time factor, as you know, is 12...

About the "curve ratio" being equal to 12, I have great doubts: compare "lengths" of the Close curve on these TFs. Before the model experiment I thought they should differ by a factor of about sqrt( 12 ). In principle they are approximately so, but still there are differences (especially if the periods are very different). Hearst must have done some digging here. Here is the code (_bars - number of history bars on a shallow period):


extern int _grosserPeriod = PERIOD_H1;
extern int _lesserPeriod  = PERIOD_M5;
extern int _bars          = 100000;


// bar shift on the coarser period corresponding to a shift on the finer one
int lesser2grosser( int sh )
{
   datetime dt = iTime( NULL, _lesserPeriod, sh );
   return( iBarShift( NULL, _grosserPeriod, dt, false ) );
}


// "length" of the Close curve: the sum of absolute bar-to-bar increments
double len( int from, int period )
{
   double sum = 0;
   for( int i = 0; i < from; i ++ )    
      sum += MathAbs( iClose( NULL, period, i ) - iClose( NULL, period, i + 1 ) );
   return( sum );
}


int start()
{
   // ratio of curve lengths over the same stretch of history;
   // for a pure random walk ratio^2 / periodsRatio should be close to 1
   double ratio = len( _bars, _lesserPeriod ) / len( lesser2grosser( _bars ), _grosserPeriod );
   double ratioSquared = ratio * ratio;
   int periodsRatio = _grosserPeriod / _lesserPeriod;
   double one = ratioSquared / periodsRatio;
   Print( "one equals ", one );
   return( 0 );
}
//+------------------------------------------------------------------+
 

Hi Alexey, I don't quite understand which curves you're talking about. Explain, please, if you don't mind.

By the way, how are you doing with the fibs?

To YDzh: I don't like timeframes at all... Not at all. I use them only because, at my current level of knowledge and skill, this evil is unavoidable. Still, I do have the results of my experiment:

My grid has only two external parameters - the number of inputs and the number of learning epochs. The grid is single-layer, self-learning, and works on the series of first differences of OPEN. So, suspecting, as Neutron puts it, "my girlfriend" of a strong dependence on the combination of these two parameters, I plugged her into the optimizer yesterday to have a look...

The optimizer racked its brains for about an hour and produced only 8 results! Spotting the best one among them was not difficult. But for TF M30 there are no results at all: within the first hundred there is no combination of inputs (for this grid) and training epochs at which the grid does not fail. So the rhetorical question "...will M5 do better than H4?" should be considered in the context of the General Theory of Relativity - that is, better for whom (for what)? For my grid - definitely not; for yours it may turn out to be the best...
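For illustration, a minimal sketch (the names are hypothetical) of how such a pair of external parameters can be exposed so the strategy tester's optimizer can iterate over them:

extern int NumInputs      = 7;    // D: number of grid inputs
extern int LearningEpochs = 24;   // training epochs per run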

 

I'm currently working on the statistics for my 2-layer grid. It turns out that on quotes the NS still determines the sign of the expected movement better if the weights are carried over from epoch to epoch. According to the experiments, it comes out to about 20% vs. 23% correct guessing. The difference, of course, is not large, but considering that this percentage enters the profitability of the TS raised to the 4th power... it is worth a lot. The effect becomes noticeable if, from epoch to epoch, the weights are passed through w = g*th(w), where g is a coefficient of the order of 0.005 rather than 1.
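For the record, a minimal sketch of the described squashing step (W[] and D are hypothetical names for the weights array and its size; MQL4 has no built-in tanh, so it is computed via MathExp):

// squash a weight through g * tanh( w ) before the next epoch
double SquashWeight( double w, double g )
{
   double e = MathExp( 2.0 * w );
   return( g * ( e - 1.0 ) / ( e + 1.0 ) );   // g * tanh( w )
}

// usage between epochs:
// for( int i = 0; i < D; i ++ )
//    W[i] = SquashWeight( W[i], 0.005 );     // g of the order of 0.005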

 
Neutron >> :

The effect becomes noticeable if weights from epoch to epoch are passed through g*th(), where g is a coefficient of the order of 0.005

My gut tells me that there are quite definite "limits" to how much the weights should change... but I don't have enough knowledge to formulate it properly. That is, I suspect that in this "place" the absolute value of a weight matters less than the relative position of the weights (of a particular neuron) on the number line. If that could somehow be probed, then, if successful, one could say: if a single neuron with D inputs is trainable in principle (on a given vector), it can be trained optimally with its weights confined to the range -1...+1, or +/-U, where U = F(D). An entirely different, "biological" learning paradigm could then emerge. My intuitive speculation on this is partly confirmed by your results with g*th(): in effect, from epoch to epoch you corral all the weights into a stall of some empirical size, not letting them scatter across the expanse of the number line.
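If one wanted to probe this "stall" idea directly, a sketch might clip each neuron's weights into the band +/-U after every update. This is entirely hypothetical: the true form of F(D), if any, is unknown, and 1/sqrt(D) below is only a placeholder guess.

// placeholder guess for U = F(D); the true dependence is unknown
double StallBound( int D )
{
   return( 1.0 / MathSqrt( D ) );
}

// confine all D weights of a neuron to the band [-U, +U]
void ClipWeights( double &W[], int D )
{
   double U = StallBound( D );
   for( int i = 0; i < D; i ++ )
   {
      if( W[i] >  U ) W[i] =  U;
      if( W[i] < -U ) W[i] = -U;
   }
}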

 
Cool wording!