Machine learning in trading: theory, models, practice and algo-trading - page 372

 
Maxim Dmitrievsky:


http://sci.alnam.ru/book_dsp.php

but there is no picture on page 126


it's not a picture there...

So save the example as a picture and upload it here

 
Oleg avtomat:


it's not a picture there...

So save the example as a picture and upload it here


This one?

 
Maxim Dmitrievsky:


This one?



Yes, that's the book.

p. 126

Example 5.4.

 
Oleg avtomat:


Yes, that's the book.

p. 126

Example 5.4.


Yeah, I didn't get it right away..., here you go


 
Maxim Dmitrievsky:


Yeah, I didn't get it right away..., here you go



now it's fine ;)
 
Dimitri:


There can be no dependence where there is no correlation. The correlation may be linear or nonlinear, but if there is dependence, there will be correlation.

And there may be correlation where there is no dependence - a spurious (false) correlation.
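A minimal sketch of what such a spurious correlation looks like (my own illustration, not from the thread or the book): two independent random walks share no dependence at all, yet their sample correlation is usually far from zero.

    import numpy as np

    rng = np.random.default_rng(1)
    # two independent random walks: no dependence between them by construction
    x = np.cumsum(rng.normal(size=2000))
    y = np.cumsum(rng.normal(size=2000))

    # the sample Pearson correlation is nevertheless typically far from zero
    print("corr(x, y) =", np.corrcoef(x, y)[0, 1])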

I have not deleted a single post in this thread.

Bendat J., Piersol A.

Applied Random Data Analysis. Translated from English. Moscow: Mir, 1989.

On p. 126

EXAMPLE 5.4. UNCORRELATED DEPENDENT RANDOM VARIABLES.
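For those who do not have the book at hand, a minimal sketch of such a pair (my own construction in the same spirit, not necessarily the book's exact example): take X symmetric around zero and Y = X^2; the Pearson correlation is zero even though Y is a deterministic function of X.

    import numpy as np

    rng = np.random.default_rng(0)
    x = rng.uniform(-1.0, 1.0, size=100_000)   # symmetric around zero
    y = x ** 2                                 # fully determined by x

    # cov(X, X^2) = E[X^3] - E[X]E[X^2] = 0 for a symmetric X,
    # so the sample correlation is ~0 despite total dependence
    print("corr(X, Y) =", np.corrcoef(x, y)[0, 1])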


 
An excerpt from Reshetov's article explaining how his RNN works.

"This article discusses in detail the problem of neural network retraining, identifies the reasons for its occurrence, and proposes a way to solve this problem.

1. Why a neural network is overtrained?

What is the reason for neural network retraining? Actually, there could be several reasons:
  1. The number of examples in the training sample is not enough to solve out-of-sample problems.
  2. The input data is unevenly distributed by the degree of correlation to the output data in different samples, which is very often the case when processing non-stationary data. For example, in a training sample, the correlation of any input parameter or several input parameters with respect to output values is significantly higher than in a test sample, or worse, correlation coefficients in different samples differ in sign. It is easy to check this by calculating correlation coefficients for all parameters in different samples before we train the neural network. And to get rid of this drawback is also quite simple, namely training examples decompose into samples randomly.
  3. Input parameters are not associated with output parameters, i.e. there is no cause-effect relationship between them - they are non-representative, and therefore there is nothing to train the neural network. And checking for correlations between input and output data will show a correlation close to zero. In this case you need to look for other input data, on which to train the neural network.
  4. The input data is highly correlated with each other. In this case you must leave the input data with maximum correlation to the output data, removing the other data that correlate well with the remaining data.
All of the above reasons for overtraining and the methods of eliminating them are common knowledge, since they have been previously described in various literature or articles on neural network technology. "
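A minimal sketch of the checks from points 2 and 4 (my own illustration of the quoted recipe, with made-up names, not Reshetov's code):

    import numpy as np
    import pandas as pd

    def split_and_check(df: pd.DataFrame, target: str, seed: int = 0) -> pd.DataFrame:
        # point 2: decompose the examples into samples randomly, then compare
        # each input's correlation with the target in the two halves
        df = df.sample(frac=1.0, random_state=seed).reset_index(drop=True)
        half = len(df) // 2
        train, test = df.iloc[:half], df.iloc[half:]
        features = [c for c in df.columns if c != target]
        report = pd.DataFrame({
            "corr_train": [train[f].corr(train[target]) for f in features],
            "corr_test":  [test[f].corr(test[target]) for f in features],
        }, index=features)
        report["sign_flip"] = np.sign(report["corr_train"]) != np.sign(report["corr_test"])
        return report

    def drop_redundant(df: pd.DataFrame, target: str, threshold: float = 0.9) -> list:
        # point 4: keep the inputs best correlated with the target and drop
        # the ones that correlate strongly with an input already kept
        features = df.drop(columns=[target])
        order = features.corrwith(df[target]).abs().sort_values(ascending=False).index
        kept = []
        for f in order:
            if all(abs(features[f].corr(features[k])) < threshold for k in kept):
                kept.append(f)
        return kept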
Files:
RNN_MT5.zip  223 kb
 
Aliosha:

Dmitry, I'm sorry, but I suspect you are either trying to troll me, or fooling around, or just being stupid, with all due respect... Can't you see, on a trivial example, that two features can each have zero correlation with the target, BUT both are significant and neither can be dropped: the linear dependence is zero, the nonlinear one is 100%. That is, the correlation can be zero while the dataset is completely predictable, which your statement:

completely refutes.
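The "trivial example" is presumably something like XOR (my guess at what is meant here): each of the two features has exactly zero correlation with the target, yet together they determine it completely.

    import numpy as np

    x1 = np.array([0, 0, 1, 1])
    x2 = np.array([0, 1, 0, 1])
    y = x1 ^ x2              # target = XOR(x1, x2), fully determined by the pair

    print(np.corrcoef(x1, y)[0, 1])   # 0.0 - no linear relation to the target
    print(np.corrcoef(x2, y)[0, 1])   # 0.0 - no linear relation to the target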


Of course I'm being silly!

I have written clearly in this thread: "I will be honest and frank - I made my diagnosis of NN a couple of years ago and abandoned this method. So how exactly things stand with NN is difficult for me to say. Maybe there is something in NN that allows you to shove in everything at hand without pre-selection. For all the DM methods I use, I have laid out my approach." (c)

If I have written several times that I do not understand NN and do not know how things work there, and then someone appears who starts yelling and screaming and citing examples from NN - what complaints can there be against me?


I wrote clearly and frankly:

1. dimensionality will decrease.

2. about the accuracy of the model - I DON'T KNOW!


But still there will be someone who starts playing dumb....

 
Mihail Marchukajtes:
Correlation between variables does not mean that prediction is possible. A pair can be correlated, yet it is impossible to forecast one of them from the other, because they change simultaneously rather than one leading the other. That is, if we are talking about correlation!!!!


Don't be stupid.

If you really want to dig into it, google, e.g., pairs trading.
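On the quoted point that correlated series need not be forecastable from one another, a minimal sketch (my own illustration): two series driven by the same contemporaneous shock are strongly correlated, yet today's value of one says nothing about tomorrow's value of the other.

    import numpy as np

    rng = np.random.default_rng(2)
    shock = rng.normal(size=5000)              # common driver, i.i.d. over time
    a = shock + 0.3 * rng.normal(size=5000)
    b = shock + 0.3 * rng.normal(size=5000)

    print("same-time corr(a, b):", np.corrcoef(a, b)[0, 1])            # high
    print("corr(a[t], b[t+1]):  ", np.corrcoef(a[:-1], b[1:])[0, 1])   # ~0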

 
Alesha:
Lying again: there is no such thing as nonlinear correlation. Correlation is a STRICTLY defined mathematical construct, like addition or cosine; read at least Wikipedia before talking nonsense.


Let's go through it like in school, from the basics: what "nonlinear correlation" is and how it is calculated:

http://metr-ekon.ru/index.php?request=full&id=412
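I don't know exactly which measure that page uses, but one common way to quantify such a nonlinear association is the correlation ratio (eta): bin X and compare the spread of the per-bin means of Y with the total spread of Y. A minimal sketch under that assumption:

    import numpy as np

    def correlation_ratio(x, y, bins=20):
        # eta^2 = variance of the per-bin means of y / total variance of y
        edges = np.quantile(x, np.linspace(0.0, 1.0, bins + 1))
        idx = np.digitize(x, edges[1:-1])      # bin index 0..bins-1 for each point
        grand = y.mean()
        between = sum(y[idx == g].size * (y[idx == g].mean() - grand) ** 2
                      for g in range(bins) if np.any(idx == g))
        total = ((y - grand) ** 2).sum()
        return np.sqrt(between / total)

    rng = np.random.default_rng(3)
    x = rng.uniform(-1.0, 1.0, 50_000)
    y = x ** 2 + 0.05 * rng.normal(size=50_000)

    print("Pearson:", np.corrcoef(x, y)[0, 1])   # ~0, no linear relation
    print("eta:    ", correlation_ratio(x, y))   # close to 1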
