From theory to practice - page 155

 
Alexander_K2:
Yes, the sample size is different for each pair. Exactly. This is very important. Even Vizard_, in my opinion, does not fully grasp this point. And the answer is simple: each pair has its own wave function - personal, so to speak.
Well, by analogy with the picture above, can you post a list of your pairs with the sample volume for each of them?
 
Yuriy Asaulenko:

Why so many questions? The comrade takes a price sampling period of 1 s, then runs all sorts of statistics on it. Unless something has changed in the meantime)). The author has already turned into a Wizard.))

What I really don't understand is the exponential time. What is it for? There are standard approaches with weighting factors that reduce the weight of old data in the statistics.

With exponential time (readings) we simply throw away some of the information between readings, and as the window moves that information starts to flicker - readings appear, disappear, reappear, and so on.

Once again: without Wizard I would have arrived at these charts much later. Though I would have arrived at them. He noticed that and simply helped out a bit. Apparently, he got tired of waiting.

And here's the thing: this algorithm is already good, and we'll see its results. If the results are positive, I'll stay with this algorithm forever. But, once again, Feynman explicitly argued that, knowing the wave function and its amplitude, one can and should predict the motion of particles. And Wizard is obviously sitting on neural networks. So, if something suddenly goes wrong, we'll smoothly move over to the next thread and start teaching everyone there and butting in between their phrases :)))))
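
A minimal sketch of the standard weighted approach mentioned above - an exponentially weighted mean and variance with a forgetting factor lambda (the names and this particular recursion are assumptions for illustration, not code from the thread). Each reading enters the statistics exactly once, with a weight that decays smoothly with age, so nothing flickers in and out of a moving window:

double g_mean=0.0;   // exponentially weighted mean
double g_var=0.0;    // exponentially weighted variance

// One update per incoming price; lambda in (0,1), larger = longer memory.
void UpdateStats(double price,double lambda)
  {
   double diff=price-g_mean;
   g_mean+=(1.0-lambda)*diff;                     // old data decays as lambda^age
   g_var=lambda*(g_var+(1.0-lambda)*diff*diff);   // weighted variance update
  }

Calling UpdateStats(Close[0],0.999) once per second would keep running statistics without storing any window at all.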

 
ILNUR777:
So, by analogy with the picture above, can you post a list of your pairs with the sample volume for each of them?
Yes - I will post it tonight.
 
Yuriy Asaulenko:

Why so many questions? The comrade takes a price sampling period of 1 s, then runs all sorts of statistics on it. Unless something has changed in the meantime)). The author has already turned into a Wizard.))

What I really don't understand is the exponential time. What is it for? There are standard approaches with weighting factors that reduce the weight of old data in the statistics.

With exponential time (readings) we simply throw away some of the information between readings, and as the window moves that information starts to flicker - readings appear, disappear, reappear, and so on.

The only thing the author is right about is that nobody does that).

I suspect it is in order to take the inverse of the exponent and then compare the linear relationship between the normal distribution and the resulting one.
 
Renat Akhtyamov:
I suspect it is in order to take the inverse of the exponent and then compare the linear relationship between the normal distribution and the resulting one.

The problem is that by adding randomness where there was none before, we are increasing entropy, not decreasing it.

If a deterministic method were proposed, there would be no problem.

In such a method, the measurement step would depend on past data according to a formula - like in Kagi or Renko, but a bit more complicated.

For example, a step derived from the accumulated volume, or from volume accumulating on the buy side while the market is falling.

I hope the examples are clear. But a random sequence is overkill.
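
As an illustration of such a deterministic rule, a sketch in the Renko spirit; the specific step formula (twice the recent average true range) and the names are assumptions for the example, not Renat's actual formula:

double g_level=0.0;   // last recorded price level

// Returns true when price has moved one full step away from the last level.
// The step itself is deterministic: a function of past data only.
bool NewReading(double price)
  {
   double step=2.0*iATR(Symbol(),PERIOD_M1,14,0);   // assumed rule: 2x recent ATR
   if(g_level==0.0) { g_level=price; return(false); }
   if(MathAbs(price-g_level)>=step)
     {
      g_level=price;   // record a reading only at these levels
      return(true);
     }
   return(false);
  }

The same price history always yields the same set of readings, so the sampling itself adds no entropy.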

 

Sample volumes:

AUDCAD = 19600
AUDCHF = 14400
AUDJPY = 12100
AUDUSD = 14400
CADJPY = 16900
CHFJPY = 19600
EURAUD = 32400
EURCAD = 44100
EURCHF = 16900
EURGBP = 14400
EURJPY = 22500
EURUSD = 16900

Mind you, this is from processing an archive of 200,000 quotes; at least 1,000,000 are needed.

 

I wonder what is so special about EURCAD that it supposedly needs 3 times as much sampling)

All the crosses are similar in volatility and trendiness; there are differences, but not severalfold.

It smells like an artifact. Alexander, where does such a difference in the calculations come from?

 
bas:

I wonder what is so special about EURCAD that it supposedly needs 3 times as much sampling)

All the crosses are similar in volatility and trendiness; there are differences, but not severalfold.

It smells like an artifact. Alexander, where does such a difference in the calculations come from?

I don't know yet. Tomorrow I will double-check with new data.
 

Dear programmers, why is it that, although I have this check everywhere before opening an order,

{
         RefreshRates();
         // NB: OrdersTotal() counts open orders on ALL symbols, not just USDCHF
         total_orders_USDCHF=OrdersTotal();
         if(total_orders_USDCHF==0)
         {
            Balance=AccountBalance();
            Lots=NormalizeDouble((Balance/10.0)*0.01,2);   // computed but not used in the OrderSend below
            double BidNorm=NormalizeDouble(Bid,Digits);
            ticket_sell_USDCHF=OrderSend("USDCHF.I",OP_SELL,0.01,BidNorm,0,0,0);
         }
}

I still end up with 4 orders opened simultaneously on different pairs?

Is it because of this?

{
   EventSetMillisecondTimer(314);   // the same 314 ms timer period in every EA
}

This parameter is the same for all EAs.
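
One likely culprit: OrdersTotal() counts open orders on all symbols, and since every EA shares the same 314 ms timer period, the four timer events can fire at practically the same moment - each EA sees OrdersTotal()==0 before any of the four OrderSend calls has completed, and all four orders go through. If the intent is one order per pair, the usual approach is to count only the orders on the EA's own symbol; a minimal sketch (the helper name is an assumption):

// Counts open orders belonging to one symbol only
// (a magic-number check could be added the same way).
int CountOrdersForSymbol(string symbol)
  {
   int count=0;
   for(int i=0;i<OrdersTotal();i++)
     {
      if(!OrderSelect(i,SELECT_BY_POS,MODE_TRADES))
         continue;
      if(OrderSymbol()==symbol)
         count++;
     }
   return(count);
  }

The check then becomes if(CountOrdersForSymbol("USDCHF.I")==0), ideally repeated right before OrderSend to narrow the race window; one order per account across all pairs would instead need a shared flag, e.g. a terminal global variable.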

 

Trainees! Where are the real accounts? No kidding and no jokes...
