Machine learning in trading: theory, models, practice and algo-trading - page 1249

 
Aleksey Vyazmikin:

So I am proceeding from the fact that the actor is the same everywhere - the trader - so why would he change his behavior depending on the instrument?

So in your reasoning you have decided, in the end, that it makes no difference whether the horse is in front of the cart or the other way round - the cart in front of the horse! ))))

A trader adjusts to the market, because there is no market participant who can say where the price will be tomorrow, and no market participant who has complete information about all the other participants.


Aleksey Vyazmikin:
If we assume that all markets are the same and price behavior shows similar patterns, then why not combine a dozen instruments into one sample and look for "features" common to all markets?

Suppose so - then in the optimizer your TS will always find excellent solutions (I hope we are not counting trading that sits out losses and exceeds risk limits), but usually everything is sad: what works on one chart does not work on another.

Here's the general problem: simplify the model (the market) to Cartesian coordinates X and Y. Based on your assumption that all markets are the same, there must then be a transformation coefficient (linear or non-linear) that allows one set of data to be converted into another - if so, then it's a task for an NN, which solves exactly this problem of finding the dependence of outputs on inputs; only the lazy here haven't fed the multiplication table to an NN ;)
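As an aside, a toy sketch of that "multiplication table" exercise - a small network learning the mapping (a, b) -> a*b. This is a hypothetical illustration using R's nnet package, not anyone's actual setup:

    # Toy illustration: a small one-hidden-layer network fitting the multiplication table
    library(nnet)

    tab <- expand.grid(a = 1:9, b = 1:9)   # all 81 multiplication-table pairs
    tab$y <- tab$a * tab$b

    set.seed(42)
    fit <- nnet(y ~ a + b, data = tab, size = 10, linout = TRUE,
                maxit = 2000, trace = FALSE)   # linear output for regression

    round(predict(fit, data.frame(a = 7, b = 8)))   # should come out near 56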

 
Igor Makanu:

Suppose so - then in the optimizer your TS will always find excellent solutions (I hope we are not counting trading that sits out losses and exceeds risk limits), but usually everything is sad: what works on one chart does not work on another.


This only indicates that the neural network, or whatever you are using, has found yet another piece of nonsense. It clings to the form, it fits a local area - one of millions of possible realizations - but it cannot "understand" the global structure. Methods that work on everything do exist; true, they were not found with the help of ML. But the main thing is that it runs, that is, earns, right?

If you are set on ML, wanting to shift responsibility from your own head and eyes onto a soulless piece of tin, then here is a "criterion" for the final assessment: if you have found "something" on the 15-minute pound, it should work roughly the same on minute gold, on the dailies and weeklies of domestic "chemos", and on everything else, including sugar and soybeans from Larry Williams' time.

Sorry for being rude (Igor Makanu: not at you personally, but in general) - open your eyes and look at the charts in the book "Long-Term Secrets"... Or Linda Raschke has some pictures; see for yourself how much they differ from the 15-minute pound or hourly bitcoin :)

If you judge by the percentage of profitable trades or by the profit factor, it does depend on the instrument and timeframe, but the difference is plus or minus 3-4 percent, no more. As the timeframe grows, the stability of the patterns falls. The most accurate results are on "ticks": whatever you test on them, the system parameters are solid as a rock. The more time a pattern needs to form, the more the picture trembles. But the pattern's structure does not "fall apart" even on MN - it is still the same pattern, the difference is only in the percentages.

--

There is an ironclad basis for the "sameness" of all market charts: they cannot be different, purely physically. Nor can they change over time, even by a "millimeter" - unless, say, the speed of light or the number Pi changes.

I have repeatedly given a link to the book; it explains everything - both why they are the same and why they cannot change over time.

 
Aleksey Vyazmikin:

I trained on 2016-2017, and then simply checked the leaves on 2014-2018 and selected those that were profitable in every year and met a number of criteria (overall growth / not too much drawdown). So I'm wondering whether such a model can be used.

If I want to compare different symbols, the main predictor is the gain in pips over different time intervals, but that won't work across different instruments...

What package or program do you use to view individual leaves and their statistics?

Did you train a single tree there or a forest?
 
Wizard2018:

This only indicates that the neural network, or whatever you are using, has found yet another piece of nonsense. It clings to the form, it fits a local area - one of millions of possible realizations - but it cannot "understand" the global structure. Methods that work on everything do exist; true, they were not found with the help of ML. But the main thing is that it runs, that is, earns, right?

If you are set on ML, wanting to shift responsibility from your own head and eyes onto a soulless piece of tin, then here is a "criterion" for the final assessment: if you have found "something" on the 15-minute pound, it should work roughly the same on minute gold, on the dailies and weeklies of domestic "chemos", and on everything else, including sugar and soybeans from Larry Williams' time.

Sorry for being rude (Igor Makanu: not at you personally, but in general) - open your eyes and look at the charts in the book "Long-Term Secrets"... Or Linda Raschke has some pictures; see for yourself how much they differ from the 15-minute pound or hourly bitcoin :)

If you judge by the percentage of profitable trades or by the profit factor, it does depend on the instrument and timeframe, but the difference is plus or minus 3-4 percent, no more. As the timeframe grows, the stability of the patterns falls. The most accurate results are on "ticks": whatever you test on them, the system parameters are solid as a rock. Then error accumulates over time and the patterns become more vague; the more time a pattern needs to form, the more the picture trembles.

--

There is an ironclad basis for the "sameness" of all market charts: they cannot be different, purely physically. Nor can they change over time, even by a "millimeter" - unless, say, the speed of light or the number Pi changes.

I have repeatedly given a link to the book; it explains everything - both why they are the same and why they cannot change over time.

Sorry, I don't always have time to read all the discussions in the thread. I'll read it - there's nothing else to read right now anyway.

 

R or Python? Why not both? Using Anaconda Python within R with {reticulate}
  • Econometrics and Free Software
  • www.r-bloggers.com
This short blog post illustrates how easy it is to use R and Python in the same R Notebook thanks to the {reticulate} package. For this to work, you might need to upgrade RStudio to the current preview version. Let's start by importing {reticulate}: it is an RStudio package that provides "a comprehensive set of tools for interoperability between Python and R". With it...
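For reference, a minimal sketch of what the post describes - calling Anaconda Python from an R session via {reticulate}. The conda environment name below is an assumption; substitute your own:

    # Mixing R and Python via {reticulate}; "r-reticulate" is a hypothetical env name
    library(reticulate)

    use_condaenv("r-reticulate", required = TRUE)  # point reticulate at an Anaconda env

    np <- import("numpy")                 # import a Python module into the R session
    x  <- np$linspace(0, 1, 5L)           # call Python functions with R syntax
    print(x)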
 
Vizard_:

Fa, you need a normal filter like Jurik's JMA, but before '71.
Preferably not too complicated and in R. Got anything?

No.

For a long time I used a JMA modified for period adaptation on MQL4, but it is of little use: it fades just like everything else. From time to time I had to step in by hand.

I use indicators as predictors, but I'm not aware of any smoothing that has predictive power for my target variables.

If it's about filters, there's a curious package called smooth. Inside, the smoothing is a Kalman filter in state-space form. It gives very good quality moving averages, with extrapolation (a forecast) several steps ahead.
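For reference, a minimal sketch of that kind of use - fitting a state-space exponential smoothing model with the smooth package and extrapolating a few steps ahead. The input series here is simulated, not real prices:

    # Smoothing with a multi-step forecast via the smooth package (simulated input)
    library(smooth)

    set.seed(7)
    y <- cumsum(rnorm(500))      # simulated random-walk "price"

    fit <- es(y, h = 5)          # ETS in state-space form, 5 steps ahead
    fit$forecast                 # the extrapolated values
    plot(fit)                    # fitted values plus the forecast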


But again, the main thing for me is predictive ability for the target variable, and everything in this package has the same problem: no predictive ability.


That's why I've given up on all the filters and smoothing and stuff.

 
Igor Makanu:

So in your reasoning you have decided, in the end, that it makes no difference whether the horse is in front of the cart or the other way round - the cart in front of the horse! ))))

A trader adjusts to the market, because there is no market participant who can say where the price will be tomorrow, and no market participant who has complete information about all the other participants.

The trader is a collective image - a person who expresses the will of the money behind him (an individual or a legal entity) and who influences the market by applying his skills to serve the client's needs. That's why I say that, collectively, similar skills win on all instruments - not because of the knowledge or the skills themselves, but because of the volume of assets deployed by the interested parties. Add to this the global religion of technical analysis, which is instilled in most market participants (I once posted a link to the Central Bank's requirement that a professional securities market participant know these postulates)... Our task is to understand how the price will move along the chosen vector of movement and to minimize the cost of entering the trade - the price of the risk.


Igor Makanu:


Suppose so - then in the optimizer your TS will always find excellent solutions (I hope we are not counting trading that sits out losses and exceeds risk limits), but usually everything is sad: what works on one chart does not work on another.

Here's the general problem: simplify the model (the market) to Cartesian coordinates X and Y. Based on your assumption that all markets are the same, there must then be a transformation coefficient (linear or non-linear) that allows one set of data to be converted into another - if so, then it's a task for an NN, which solves exactly this problem of finding the dependence of outputs on inputs; only the lazy here haven't fed the multiplication table to an NN ;)

I'm working more with predictors that describe the market - I already have more than 300 of them, and 300 inputs is too much for an NN... That's why I use tree-based models. In any case, the problem is converting points into relative units so that the predictors don't depend on the symbol - I do it through the daily ATR (a rough sketch of this follows below), but maybe there is a better method, I don't know. In any case, I need to try different ways of transforming the data, since my training sample is small: not all variations are represented, or they occur in such small numbers that the rule cannot be identified (the leaf cannot be formed).
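For illustration, a minimal sketch of that kind of normalization - expressing point moves in units of the daily ATR so the predictor is comparable across symbols. The OHLC data here is simulated and the TTR package is assumed:

    # Converting a point move into ATR-relative units (simulated OHLC, TTR package)
    library(TTR)

    set.seed(3)
    cl   <- cumsum(rnorm(300, 0, 10)) + 1000        # simulated close prices
    ohlc <- data.frame(High  = cl + runif(300, 0, 5),
                       Low   = cl - runif(300, 0, 5),
                       Close = cl)

    daily_atr <- ATR(ohlc[, c("High", "Low", "Close")], n = 14)[, "atr"]

    # Raw predictor: close-to-close change in points over the last k days
    k        <- 5
    lag_k    <- function(x, n) c(rep(NA, n), head(x, -n))
    raw_gain <- ohlc$Close - lag_k(ohlc$Close, k)

    # The same move expressed in daily-ATR units, comparable across symbols
    rel_gain <- raw_gain / daily_atr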

 
elibrarius:

What package or program do you use to view individual leaves and their statistics?

Did you train a single tree there or a forest?

Yes - working with the sample in Deductor Studio is very instructive: there you can rebuild branches or build a tree from scratch, a very good tool for understanding how a tree works and for testing your hypotheses. Its drawback is that the package is paid and you can't simply export the rules (leaves)...

I use a script in R that generates a tree with a genetic algorithm, then a script that unloads the tree data at each iteration, and then a parser program, which I wrote separately, that converts the trees into leaves in this format:

            if(Test_P!=1245)if(DonProc>=5.5 && TimeH< 10.5 && Levl_High_H4s1< 1.5) CalcBuy=CalcBuy+1; //(0.22513089 0.30366492 0.47120419)
            if(Test_P!=2030)if(Povtor_Low_M1>=0 && TimeH>=10.5 && TimeH< 21.5 && BB_iD_Center_H1< 0 && Levl_Close_D1>=-2.5 && Levl_Support_W1s1< 4.5 && LastBarPeresekD_Down_M15< 4.5) CalcBuy=CalcBuy+1; //(0.09111617 0.51252847 0.39635535)
            if(Test_P!=2537)if(Povtor_High_M1>=0 && rLevl_Down_iD_RSI< -6.5 && TimeH< 14.5) CalcBuy=CalcBuy+1; //(0.1990172 0.3832924 0.4176904)
            if(Test_P!=3243)if(Levl_Close_H1>=0 && TimeH<10.5 && Levl_Support_W1<-3.5) CalcBuy=CalcBuy+1; //(0.1153846 0.1538462 0.7307692)
            if(Test_P!=3314)if(Levl_Close_H1>=0 && TimeH< 10.5 && Levl_Low_W1s1N< 4.5 && Levl_Support_W1< -3.5) CalcBuy=CalcBuy+1; //(0.1153846 0.1538462 0.7307692)
            if(Test_P!=3583)if(Povtor_Type_M1>=0 && TimeH< 10.5 && Levl_Close_W1< -3.5) CalcBuy=CalcBuy+1; //(0.11428571 0.20000000 0.68571429)
            if(Test_P!=3857)if(Povtor_Type_M1>=0 && TimeH<10.5 && Levl_Support_W1<-3.5) CalcBuy=CalcBuy+1; //(0.07142857 0.17857143 0.75000000)
            if(Test_P!=6546)if(Povtor_Type_H1< 0 && Levl_Close_H1s1N>=0 && Levl_Close_H1s1N< 2.5 && Levl_High_W1s1>=2.5 && DonProc_M15>=5.5) CalcBuy=CalcBuy+1; //(0.1228070 0.4210526 0.4561404)
            if(Test_P!=6676)if(Povtor_Type_H1< 0 && Levl_Close_H1s1N>=0 && Levl_Close_MN1< 4.5 && TimeH< 21.5 && BB_iD_Center_H1< 0 && Povtor_Type_M15>=0 && Levl_Down_DC_M15>=-2.5) CalcBuy=CalcBuy+1; //(0.10619469 0.42477876 0.46902655)
            if(Test_P!=8673)if(Levl_Close_H1s1< 0 && Levl_Close_H1s1N>=0 && Part_H4>=2.5 && TimeHG< 3 && Levl_first_W1s1>=0.5) CalcBuy=CalcBuy+1; //(0.11607143 0.40178571 0.48214286)
            if(Test_P!=8840)if(TimeHG>=1.5 && RSI_Open_M1< 0.5 && BB_Peresek_Last_M1< 0.5 && RSI_Open_M1>=-0.5 && Levl_Support_W1s1>=-4.5 && Povtor_Low_H1>=0 && Levl_Support_H4>=0 && RegressorSpeed< 1.5) CalcBuy=CalcBuy+1; //(0.1606218 0.4145078 0.4248705)
            if(Test_P!=10002)if(rOpen_WormsDown>=0 && BB_Peresek_Last_M1< 0.5 && rDeltaWorms< 2.5 && DonProcVisota< 4.5 && Part_D1< 3.5 && BB_iD_Center_H1< 0 && Levl_Close_H1s1N>=0) CalcBuy=CalcBuy+1; //(0.1890244 0.3963415 0.4146341)
            if(Test_P!=10395)if(rOpen_WormsDown>=0 && Povtor_Type_M15>=0 && Levl_Low_H1< -4.5 && Levl_Close_H1s1N>=0) CalcBuy=CalcBuy+1; //(0.1990741 0.3888889 0.4120370)
            if(Test_P!=14244)if(rPeresek_Up<0.5 && BB_Peresek_Last_M1<0.5 && Polozhenie_M1>=0 && Povtor_High_H1<-2.5) CalcBuy=CalcBuy+1; //(0.1948052 0.3506494 0.4545455)
            if(Test_P!=14462)if(rPeresek_Up<0.5 && BB_Peresek_Last_M1<0.5 && Polozhenie_M1>=0 && DonProc_M15<9.5 && Levl_Support_H4s1<4.5 && Povtor_High_H1<-2.5) CalcBuy=CalcBuy+1; //(0.2112676 0.3239437 0.4647887)
            if(Test_P!=17944)if(Levl_Low_H1s1N< -1.5 && Levl_Close_H4>=0 && Levl_Close_H1s1N>=0 && BB_iD_Center_H1< 0 && Part_H1< 2.5) CalcBuy=CalcBuy+1; //(0.1408451 0.3239437 0.5352113)
            if(Test_P!=18382)if(Povtor_Low_M15< 3.5 && LowPerekH1s1_1< 0.5 && Polozhenie_M1>=0 && BB_iD_Down_M1>=-5.5 && DonProcVisota>=3.5 && Povtor_Low_M15< 1.5 && BB_iD_Down_M1>=-1.5) CalcBuy=CalcBuy+1; //(0.1659389 0.3842795 0.4497817)
            if(Test_P!=19123)if(rPeresek_Down< 0.5 && Povtor_Low_M15< 3.5 && LowPerekH1s1_1< 0.5 && Polozhenie_M1>=0 && rCalcLvlWorms< 1.5 && DonProcVisota>=3.5 && rLevl_UpPeresek_iD_RSI< 1.5 && RegressorCalc_S1>=-1.5 && Levl_first_W1s1>=-0.5) CalcBuy=CalcBuy+1; //(0.1225490 0.4313725 0.4460784)
            if(Test_P!=26038)if(Levl_Support_H1s1>=-3.5 && Part_H4< 2.5 && LowPerekH1s1_0>=0.5 && Part_H1>=1.5) CalcBuy=CalcBuy+1; //(0.1912568 0.4153005 0.3934426)

Then I use an Expert Advisor that reads the predictors and the financial result from a file and applies the leaves to them in the "Math calculations" optimization mode; internally it gathers statistics - financial indicators and other measures - which are passed from the agents to the Expert Advisor in frames and finally collected into a single file.


Then I look at how each leaf behaved in each reporting period.
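As an illustration only (not the genetic-tree script described above), a minimal sketch of unloading leaf rules from a single tree fitted in R, assuming the rpart and rpart.plot packages:

    # Fit one classification tree and print the rule that defines each leaf;
    # each printed row is analogous to one "if(...) CalcBuy=CalcBuy+1;" line above.
    library(rpart)        # recursive-partitioning trees
    library(rpart.plot)   # rpart.rules() lists the split chain per leaf

    fit <- rpart(Species ~ ., data = iris, method = "class",
                 control = rpart.control(cp = 0.01))

    rpart.rules(fit)      # predicted class and the chain of conditions per leaf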

 
Aleksey Vyazmikin:

Yes - working with the sample in Deductor Studio is very instructive: there you can rebuild branches or build a tree from scratch, a very good tool for understanding how a tree works and for testing your hypotheses. Its drawback is that the package is paid and you can't simply export the rules (leaves)...

I use a script in R that generates a tree with a genetic algorithm, then a script that unloads the tree data at each iteration, and then a parser program, which I wrote separately, that converts the trees into leaves in this format:

Then I use an Expert Advisor that reads the predictors and the financial result from a file and applies the leaves to them in the "Math calculations" optimization mode; internally it gathers statistics - financial indicators and other measures - which are passed from the agents to the Expert Advisor in frames and finally collected into a single file.


Then I look at how each leaf behaved in each reporting period.

Jeez, everything you are doing by hand should be done by a neural network, and even so... It's a waste of time, and if the result turns out negative there will be a lot of frustration and a search for other methods.

 
Vizard_:

Well done. For convenience, try highlighting (heat maps).

Yes, I've seen implementations like that with highlighting, but that's a separate algorithm to write, and it means pulling in canvas, which I haven't figured out at all. So for now I pull the data out with formulas. A map would be interesting, but a multidimensional one would be better...

Now I'm thinking about what the goal of collecting leaves for one model should be. First of all I selected stable leaves - I've already written about that. Next come the filter leaves (my classification is -1/0/1 - sell / don't enter / buy): for selling, a filter can consist of leaves from the "don't enter" group and from the "buy" group that in fact belong to the "sell" group - either their pattern was false, or the map of entry leaves does not cover this area with an entry signal but filters it well by ignoring the entry.

So far the filter criteria are: improved profit, reduced drawdown, and a reduced number of losing entries in a row (a rough sketch of these checks is below). You can take 4-6 leaves which, in 3 or 4 of the periods, simultaneously improve all of these indicators as well as the overall result - profit and drawdown. Then either focus on profitability and the percentage of correct entries, or - something I'm thinking of trying - match each entry leaf with an individual filter (1-2 leaves), which is more expensive but in theory should be more effective.
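For illustration, a minimal sketch of those filter criteria - checking whether excluding the trades vetoed by a candidate filter leaf simultaneously improves profit, drawdown and the losing streak. The trade results and the filter mask below are made up:

    # Do the filtered trades beat the unfiltered ones on all three criteria?
    max_drawdown <- function(pnl) {
      eq <- cumsum(pnl)
      max(cummax(eq) - eq)                  # deepest fall from a running equity peak
    }
    max_losing_streak <- function(pnl) {
      r <- rle(pnl < 0)
      if (any(r$values)) max(r$lengths[r$values]) else 0
    }
    filter_improves <- function(pnl, keep) {   # keep: TRUE where the filter allows the entry
      f <- pnl[keep]
      sum(f) > sum(pnl) &&
        max_drawdown(f) < max_drawdown(pnl) &&
        max_losing_streak(f) < max_losing_streak(pnl)
    }

    set.seed(1)
    pnl  <- rnorm(200, 0.1, 1)     # hypothetical per-trade results in relative units
    keep <- runif(200) > 0.3       # hypothetical filter-leaf decision for each trade
    filter_improves(pnl, keep)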
