Question on neural network programming

 
And then such a network configuration is capable of solving any problem, including "exclusive OR" (XOR) :)
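As a minimal illustration of that claim, here is a hand-wired 2-2-1 perceptron computing XOR, sketched in MQL4 (the weights are picked by hand rather than trained; the function names are made up for illustration):

   // One hidden unit fires on OR, the other on AND, and the
   // output fires on "OR and not AND", which is exactly XOR.
   int step(double x)
   {
      if(x > 0.0) return(1);
      return(0);
   }

   int XOR(int x1, int x2)
   {
      int h1 = step(x1 + x2 - 0.5);   // OR
      int h2 = step(x1 + x2 - 1.5);   // AND
      return(step(h1 - h2 - 0.5));    // XOR
   }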
 
Ok. But I'd still like to know a bit more about the practice: what do we feed to the input, what dimension N should be set, and so on? Unless, of course, it's a secret :) I'm a dummy at this business, but I'm ready to join in.
 
rsi:
Ok. But I'd still like to know a bit more about the practice: what do we feed to the input, what dimension N should be set, and so on? Unless, of course, it's a secret :) I'm a dummy at this business, but I'm ready to join in.

I posted the indicator above. The slope angle of a linear regression is fed to the input. Try running it on EURUSD H1.
 
Thanks, I'll go have a look :)
 
rsi:
Thanks, I'll go have a look :)

I'd like to add a Z-score to this thing, and that would be good :)
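For reference, the Z-score in this context is presumably the trade-series runs test: Z = (N(R - 0.5) - P) / sqrt(P(P - N)/(N - 1)), where N is the number of trades, R the number of win/loss streaks, W and L the win and loss counts, and P = 2WL. A minimal MQL4 sketch (the function and array names are made up for illustration):

   // Hypothetical sketch of the trade-series Z-score (runs test).
   // wins[] holds 1 for a winning trade, 0 for a losing one.
   double TradeZScore(int &wins[])
   {
      int N = ArraySize(wins);
      if(N < 3) return(0.0);
      int W = 0, L = 0, R = 1;                     // wins, losses, streaks
      for(int i = 0; i < N; i++)
      {
         if(wins[i] == 1) W++; else L++;
         if(i > 0 && wins[i] != wins[i - 1]) R++;  // streak changed
      }
      double P = 2.0 * W * L;
      double d = P * (P - N) / (N - 1);
      if(d <= 0.0) return(0.0);
      return((N * (R - 0.5) - P) / MathSqrt(d));
   }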
 

Yeah, it's a funny thing. Thanks again, klot, I've got enough to keep me busy for the weekend :)

 
This is a neural network from MetaQuotes. Take a look, maybe it will help. It predicts reversals well. The direction is only tolerable over short horizons of 2-5 bars.
 
AAAksakal, look at what?
 
klot:


And in general, any neural network can easily be programmed directly in MQL4. You can also select its weights using the MT4 genetic optimizer, or one of your own. Pessimism is defined only by a lack of imagination. Basically, there are no limits...

Pessimism is defined by the limitations of the strategy tester: if the range of an input variable is too large, or the number of values to optimize exceeds the limit, the optimizer refuses to start. So there are limits after all.

Today I finally finished building a neural network written entirely in MQL4, with a 3:3:1 architecture (three input neurons, three hidden neurons, one output). All the layers are tuned with the tester's genetic algorithm. The trouble is that one layer alone needs at least 12 input parameters, even with values restricted to -1, 0 and 1 (as in Rosenblatt's perceptron), and the optimizer can't handle that many. I had to improvise and simplify the first layer.
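A minimal sketch of what such a 3:3:1 net might look like in MQL4, with every weight and threshold exposed to the tester's GA. This is an illustration only, not Reshetov's code: the names and the hard-threshold activation are assumptions, and his actual parameters (t1..t4, x1..x3, y1..y3 in the report below) belong to his simplified scheme.

   // Hypothetical 3:3:1 layout: 9 weights + 3 thresholds in the
   // first layer (the 12 parameters mentioned above) and 3 weights
   // + 1 threshold in the output layer, all exposed to the tester GA
   // with range -1..1, step 1.
   extern int w11 = 0; extern int w12 = 0; extern int w13 = 0; extern int th1 = 0;
   extern int w21 = 0; extern int w22 = 0; extern int w23 = 0; extern int th2 = 0;
   extern int w31 = 0; extern int w32 = 0; extern int w33 = 0; extern int th3 = 0;
   extern int v1  = 0; extern int v2  = 0; extern int v3  = 0; extern int th4 = 0;

   double sgn(double x)   // hard-threshold activation
   {
      if(x > 0.0) return(1.0);
      return(-1.0);
   }

   double Net(double z1, double z2, double z3)
   {
      double h1 = sgn(w11*z1 + w12*z2 + w13*z3 - th1);
      double h2 = sgn(w21*z1 + w22*z2 + w23*z3 - th2);
      double h3 = sgn(w31*z1 + w32*z2 + w33*z3 - th3);
      return(sgn(v1*h1 + v2*h2 + v3*h3 - th4));    // +1 = buy, -1 = sell
   }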

In contrast to someone else's network, a home-made one is better in that it can be upgraded. For example, besides making the first layer non-standard, I added dynamic normalization of the input data.

Signals at the inputs are quite primitive:

   static  int  p = 12;   // lookback period
   ...
   // Deviation of price from its p-period SMA, taken now,
   // p bars ago and 2p bars ago:
   double       z1 = Close[0] - iMA(Symbol(), 0, p, 0, MODE_SMA, PRICE_OPEN, 0);
   double       z2 = Open[p] - iMA(Symbol(), 0, p, 0, MODE_SMA, PRICE_OPEN, p);
   double       z3 = Open[p * 2] - iMA(Symbol(), 0, p, 0, MODE_SMA, PRICE_OPEN, p * 2);
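The dynamic normalization itself isn't shown in the thread; one simple way to do it (an assumption, not the author's code) is to rescale the SMA deviations by current volatility, for example the ATR over the same period:

   // Hypothetical dynamic normalization: divide each SMA deviation
   // by the current ATR so the inputs stay on a comparable scale
   // across volatility regimes.
   double atr = iATR(Symbol(), 0, p, 0);
   if(atr > 0.0)
   {
      z1 /= atr;
      z2 /= atr;
      z3 /= atr;
   }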

Despite the primitiveness mentioned above, the network turns out to be very trainable: weights and thresholds are easily picked so that the test results come out without a single losing trade (the profit factor is undefined, since there are no losses). But after such a fitting the forward test immediately starts losing money to the spread. I had to fiddle with the trading strategy so as not to let the network curve-fit.

It was worth the effort, although it made my brain turn inside out:

These are the test results. Trades 1 through 273 are the optimization section; everything after that is the forward test.

And here are the results of the forward test:

Strategy Tester Report
RNN
Alpari-Demo (Build 409)

Symbol: EURUSD (Euro vs US Dollar)
Period: 1 Hour (H1) 2011.10.24 00:00 - 2012.01.13 23:59 (2011.10.24 - 2012.01.14)
Model: By open prices (only for Expert Advisors with explicit bar opening control)
Parameters: t1=54; t2=4; t3=48; x1=194; x2=128; x3=68; y1=1; y2=1; y3=-1; t4=136; sl=900; lots=1; mn=888;

Bars in history: 2431    Modelled ticks: 3862    Modelling quality: n/a
Mismatched chart errors: 0

Initial deposit: 10000.00
Total net profit: 14713.00    Gross profit: 40711.60    Gross loss: -25998.60
Profit factor: 1.57    Expected payoff: 88.10
Absolute drawdown: 2721.60    Maximal drawdown: 4800.00 (39.74%)    Relative drawdown: 39.74% (4800.00)

Total trades: 167
Short positions (won %): 101 (67.33%)    Long positions (won %): 66 (92.42%)
Profit trades (% of total): 129 (77.25%)    Loss trades (% of total): 38 (22.75%)
Largest profit trade: 900.00    Largest loss trade: -907.20
Average profit trade: 315.59    Average loss trade: -684.17
Maximum consecutive wins (profit in money): 13 (2557.00)    Maximum consecutive losses (loss in money): 4 (-3605.40)
Maximal consecutive profit (count of wins): 3511.60 (11)    Maximal consecutive loss (count of losses): -3605.40 (4)
Average consecutive wins: 4    Average consecutive losses: 1





The most interesting thing is that even from the chart you can see that the optimization section is worse than the forward section. That rarely happens. Admittedly, I picked this run as the one with the best forward results out of many others; in the rest, the forward results are much worse than the optimization results, even though those optimization results are the best.

 
Reshetov:

Pessimism is defined by the limitations of the strategy tester: if the range of an input variable is too large, or the number of values to optimize exceeds the limit, the optimizer refuses to start. So there are limits after all.

Today I finally finished building a neural network written entirely in MQL4, with a 3:3:1 architecture (three input neurons, three hidden neurons, one output). All the layers are tuned with the tester's genetic algorithm. The trouble is that one layer alone needs at least 12 input parameters, even with values restricted to -1, 0 and 1 (as in Rosenblatt's perceptron), and the optimizer can't handle that many. I had to improvise and simplify the first layer.

In contrast to someone else's network, a home-made one is better in that it can be upgraded. For example, besides making the first layer non-standard, I added dynamic normalization of the input data.

Signals at the inputs are quite primitive:

Despite the primitiveness mentioned above, the network turns out to be very trainable: weights and thresholds are easily picked so that the test results come out without a single losing trade (the profit factor is undefined, since there are no losses). But after such a fitting the forward test immediately starts losing money to the spread. I had to fiddle with the trading strategy so as not to let the network curve-fit.

It was worth the effort, although it made my brain turn inside out:



I made a normal network with 256 inputs and one hidden layer of 256 neurons, plus an output layer of one neuron. And I trained it all just fine in MT4.
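klot doesn't post the code, but a bare-bones forward pass for such a 256:256:1 net is easy to sketch in MQL4 (the array names, the sigmoid activation and the way the weights get filled are all assumptions):

   // Hypothetical 256:256:1 forward pass. The weights and biases
   // would be filled by whatever training procedure is used.
   double w1[256][256];   // input -> hidden weights
   double b1[256];        // hidden biases
   double w2[256];        // hidden -> output weights
   double b2 = 0.0;       // output bias

   double Sigmoid(double x)
   {
      return(1.0 / (1.0 + MathExp(-x)));
   }

   double Forward(double &in[])   // in[] holds the 256 inputs
   {
      double out = b2;
      for(int j = 0; j < 256; j++)
      {
         double s = b1[j];
         for(int i = 0; i < 256; i++) s += w1[j][i] * in[i];
         out += w2[j] * Sigmoid(s);
      }
      return(Sigmoid(out));
   }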

There was also a variant with three hidden layers, but they turned out to be unnecessary.
