How do you properly compare two non-overlapping series?

[image: two non-overlapping series plotted at different levels]


There are two non-overlapping series that sit at 'different levels' (as in the picture above).

How can they be 'combined' so that they lie side by side and overlap?

You could calculate the mean of each series and divide through by it, i.e. series_1 = value_1 / mean_1, and so on. But is that the right way to do it? Does the sample size affect the adequacy of the result, or should it be done differently? Or through min/max normalisation? And again, over what sampling period? What, in fact, is the right way?

I think you know what I mean...
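A minimal sketch (Python, purely illustrative; the function and variable names are my own) of the two options mentioned: dividing each series by its mean, and min/max normalisation:

```python
def mean_normalize(series):
    """Divide every value by the series mean, as in series_1 = value_1 / mean_1."""
    m = sum(series) / len(series)
    return [v / m for v in series]

def min_max_normalize(series):
    """Rescale the series linearly into [0, 1] using its min and max."""
    lo, hi = min(series), max(series)
    return [(v - lo) / (hi - lo) for v in series]

# Two series at 'different levels'
a = [100.0, 102.0, 101.0, 104.0]
b = [10.0, 10.4, 10.1, 10.6]

# After either transform the series live on a comparable scale
na = mean_normalize(a)   # values hover around 1
nb = min_max_normalize(b)  # values lie in [0, 1]
```

Note that both transforms depend on the sample they are computed over: extend the window and the mean (or min/max) moves, so all previously normalised points move with it.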

 
Evgeniy Chumakov:


There are two non-overlapping series at 'different levels'. How can they be 'combined' so that they are side by side and overlap? [...]

And what exactly do you mean by "comparing" these series?

If shifting by the mean values, then scaling by the RMS seems sensible to me.
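What "shift by the mean, scale by the RMS" looks like in a sketch (Python, illustrative names) is an ordinary z-score:

```python
def zscore(series):
    """Shift by the mean, scale by the RMS deviation (population std)."""
    n = len(series)
    mean = sum(series) / n
    rms = (sum((v - mean) ** 2 for v in series) / n) ** 0.5
    return [(v - mean) / rms for v in series]

a = [100.0, 102.0, 101.0, 104.0]
b = [10.0, 10.4, 10.1, 10.6]

# Both series end up with mean 0 and unit spread, so they overlap on one plot
za, zb = zscore(a), zscore(b)
```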
 
PapaYozh:
What exactly do you mean by 'comparing' these series?


The best way to bring the series onto a 'single plane' is the one where the sampling window has the least impact on the result.

 

Zhenya, what are the units, and what do the curves represent? Why do they need to be combined? Just so it's easier to understand.

 
Evgeniy Chumakov:


There are two non-overlapping series at 'different levels'. How can they be 'combined' so that they are side by side and overlap? [...]

Bring the starting points of the two graphs to zero.

Then they will cross zero, and each other, accordingly.

Again, this assumes the dimensions are comparable.

I wonder why this is needed; I thought bifurcation and futile attempts to catch divergences had long since died out.
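"Bringing the beginnings to zero" can be sketched like this (Python, illustrative): subtract each series' first value so both start at 0:

```python
def rebase_to_zero(series):
    """Shift the whole series so that its first value becomes 0."""
    start = series[0]
    return [v - start for v in series]

a = [100.0, 102.0, 101.0, 104.0]
b = [10.0, 10.4, 10.1, 10.6]

ra, rb = rebase_to_zero(a), rebase_to_zero(b)
# Both now start at 0; whether they stay close afterwards depends on the
# series having comparable 'dimensions' (units and scale of movement).
```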
 
Evgeniy Chumakov:

There are two non-overlapping series at 'different levels'. How can they be 'combined' so that they are side by side and overlap? [...]

Option 1: normalise both series, i.e. remove the constant component from each: find the mean value and subtract it from every point.

Option 2: build the series of differences and normalise that.
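Both options can be sketched as follows (Python, illustrative): Option 1 subtracts the mean from each series; Option 2 takes the point-by-point difference and removes its constant component the same way:

```python
def demean(series):
    """Option 1: remove the constant component (subtract the mean from every point)."""
    m = sum(series) / len(series)
    return [v - m for v in series]

def demeaned_difference(s1, s2):
    """Option 2: build the difference series, then remove its constant component."""
    diff = [x - y for x, y in zip(s1, s2)]
    return demean(diff)

a = [100.0, 102.0, 101.0, 104.0]
b = [10.0, 10.4, 10.1, 10.6]

da, db = demean(a), demean(b)        # both oscillate around 0
d = demeaned_difference(a, b)        # spread of a over b, centred at 0
```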

 
Evgeniy Chumakov:


The best way to bring the series onto a 'single plane' is the one where the sampling window has the least impact on the result.

Compare the ratios of their short and long moving averages. Sort of like a MACD, only a ratio instead of a difference.
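A sketch of that MACD-like ratio (Python, illustrative; the window lengths are arbitrary): take a short and a long simple moving average of each series and divide instead of subtracting, so the result hovers around 1 regardless of the series' level:

```python
def sma(series, period):
    """Simple moving average; defined from index period-1 onward."""
    return [sum(series[i - period + 1:i + 1]) / period
            for i in range(period - 1, len(series))]

def ma_ratio(series, fast=3, slow=5):
    """Ratio of a short MA to a long MA, aligned on the slow MA's start.

    Like a MACD, but a ratio instead of a difference: values hover around 1,
    which makes series of different levels directly comparable.
    """
    f, s = sma(series, fast), sma(series, slow)
    offset = slow - fast  # the fast MA starts earlier; drop its head to align
    return [x / y for x, y in zip(f[offset:], s)]
```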
 
Evgeniy Chumakov:


There are two non-overlapping series at 'different levels'. How can they be 'combined' so that they are side by side and overlap? [...]

Why do they have to be combined at all? What does it matter what the graphs look like? You will be working with arrays of data anyway. For example:

https://www.mql5.com/ru/docs/standardlibrary/mathematics/stat/mathsubfunctions/statmathcorrelationpearson

bool  MathCorrelationPearson(
   const double&  array1[],  // first array of values
   const double&  array2[],  // second array of values
   double&        r          // correlation coefficient
   )



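For illustration, here is the same idea outside MQL5 (a Python sketch of the Pearson coefficient; names are my own). Correlation is invariant to shifting and scaling, which is exactly why the series do not need to be 'combined' first:

```python
def pearson(x, y):
    """Pearson correlation coefficient of two equal-length series."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

a = [100.0, 102.0, 101.0, 104.0]
b = [10.0, 10.2, 10.1, 10.4]   # a shifted/scaled copy of a

r = pearson(a, b)              # ~1.0, despite the different levels
```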
 
Evgeniy Chumakov:


There are two non-overlapping series at 'different levels'. How can they be 'combined' so that they are side by side and overlap? [...]

Series are usually combined by minimising the standard deviation of the residual. That is linear regression, the least-squares method.
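A sketch of that suggestion (Python, illustrative): fit y ≈ a*x + b by least squares, then overlay the fitted series on y; this is the shift-and-scale of x that minimises the residual's spread:

```python
def fit_onto(x, y):
    """Least-squares fit of series x onto series y.

    Returns slope a and intercept b minimising sum((a*x_i + b - y_i)**2).
    """
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((v - mx) ** 2 for v in x)
    sxy = sum((u - mx) * (v - my) for u, v in zip(x, y))
    a = sxy / sxx
    b = my - a * mx
    return a, b

x = [100.0, 102.0, 101.0, 104.0]
y = [1.0, 3.0, 2.0, 5.0]        # here y happens to equal x - 99 exactly

a, b = fit_onto(x, y)
rescaled = [a * v + b for v in x]   # x mapped onto y's level and scale
```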

 
CHINGIZ MUSTAFAEV:

Bring the starting points of the two graphs to zero. [...]

Then they will fly off in different directions and will not intersect any time soon.

 
Evgeniy Chumakov:


There are two non-overlapping series at 'different levels'. How can they be 'combined' so that they are side by side and overlap? [...]

Time-series standardisation: every statistical package has it.
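Standardisation as found in the packages, sketched with Python's standard library (illustrative): subtract the mean and divide by the standard deviation:

```python
from statistics import fmean, pstdev

def standardize(series):
    """Classic time-series standardisation: (x - mean) / std."""
    m, s = fmean(series), pstdev(series)
    return [(v - m) / s for v in series]

a = [100.0, 102.0, 101.0, 104.0]
b = [10.0, 10.4, 10.1, 10.6]

# Both standardised series have mean 0 and unit variance, so they overlap
sa, sb = standardize(a), standardize(b)
```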
