
In MQL it goes something like this:
if(h<3) { h=pow(4./3./n,0.2); }
There is an error here. If h is not given, it should be calculated by the formula h=pow(4./3./n,0.2); if it is specified as an input parameter p_h, then h = p_h. In Matlab, the variable that counts the number of input arguments is called nargin. If nargin<3, it means that only the first two inputs, x and y, were specified when the function was called, and in that case we calculate h from the formula.
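MQL has no nargin, so the "was h supplied?" check has to be expressed differently. A minimal sketch of one way to port it, assuming h<=0 is treated as "not supplied" (the function name Bandwidth and the sentinel convention are my own illustration, not from the original code):

double Bandwidth(const int n,const double p_h=0.0)
  {
   double h=p_h;                  // p_h>0 means the caller supplied a bandwidth
   if(h<=0.0)
      h=MathPow(4.0/3.0/n,0.2);   // rule-of-thumb default, same formula as above
   return(h);
  }

Calling Bandwidth(n) plays the role of nargin<3 in Matlab, while Bandwidth(n,p_h) uses the supplied value.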
I came across an article and it seemed to be on topic. The file is in the attachment.
Here is the sentence:
We report evidence of a deep interplay between cross-correlations hierarchical properties and multifractality of New York Stock Exchange daily stock returns.
I find this article difficult to understand, but thanks anyway.
I wondered about the distribution of positive and negative market price deviations. It was discussed here once, and the conclusion was that negative deviations are stronger than positive ones. I'll try replacing the single regression line with two lines, one fitted to positive input values and one to negative input values, and see what happens; a sketch of the idea follows below.
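A minimal sketch of that two-line idea in MQL (the FitOLS helper, its name and signature are my own illustration, not code from this thread): fit an ordinary least-squares line separately over the points with positive inputs and over those with negative inputs.

bool FitOLS(const double &x[],const double &y[],const bool positive,
            double &a,double &b)
  {
   double sx=0,sy=0,sxx=0,sxy=0;
   int m=0;
   for(int i=0; i<ArraySize(x); i++)
     {
      if(positive ? x[i]<=0.0 : x[i]>=0.0)
         continue;                 // keep only points with the requested sign
      sx+=x[i]; sy+=y[i]; sxx+=x[i]*x[i]; sxy+=x[i]*y[i];
      m++;
     }
   double d=m*sxx-sx*sx;
   if(m<2 || d==0.0)
      return(false);               // too few points for a fit
   b=(m*sxy-sx*sy)/d;              // slope of y = a + b*x
   a=(sy-b*sx)/m;                  // intercept
   return(true);
  }

Calling it twice, with positive=true and positive=false, gives the two regression lines.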
Models come in two types: regression and classification.
Random forests are very common for classification; they work very well on source data with a large number of variables and a small number of observations. Usually 50-100 observations are enough, while the variables can number several tens of thousands.
Rattle is very handy for rough estimates. You mentioned Matlab, so for you Rattle is a day's work, and six models of both types become available, both regression and classification. There you can also look at correlations, discard some variables, estimate the result, and generally get a lot of interesting information about your variables, including an assessment of their importance.
Spend the time on Rattle; you won't regret it, especially at your stage and with your goals, if only to see what you get.
Would you like to add the data from "Meta COT project - new horizons for analysis of CFTC reports in MetaTrader 4 terminal" to your classifier? It would be interesting to see how strong a predictor it is.
I looked at the article but didn't understand the data. I would love to try new data, but tell me exactly what kind and where to get it. The data should go back at least to 1980 (ideally to 1960). The RMS of my system's predictions decreases as the start of the training history is moved to the right, and its predictions become worse than random if the start is moved past 1980. This is most likely due to the shrinking number of past predictions on the basis of which the best predictors are chosen for the test period since 2000. I am already downloading the put/call ratio for the S&P 500, but my system discards this data: it starts in 1995, while the system learns from 1960, and all data that starts later than 1960 is automatically discarded.
Unfortunately, the requirement for history back to 1960 is very tough. COT reports in their current form only began to be compiled in the late 1980s (the good old eighties).
Try putting together a model that needs less history to learn. COT comes out once a week, and put/call ratios are generally available at the end of each day, i.e. the total number of observations for such data could be even greater than for monthly reports.
If anything, the data is here: http://www.cftc.gov/MarketReports/CommitmentsofTraders/HistoricalCompressed/index.htm
How can we predict the crash of May 2010, caused by a robot error (everyone has come to that opinion), when the euro collapsed by more than 1000 (!) pips, or the crash caused by the franc's behavior in January?
That is why a crash is a crash: it happens SUDDENLY! :)
A crash caused by an algorithm is an algorithm error; it occurs rarely and can be corrected by analyzing the situation and the algorithm itself.
But crashes happen every day: any sharp rate change away from the equilibrium state can be viewed as a crash.
Such a crash is caused by the crowd's behavior and has its harbingers. Everyone is looking for them.