Better NN EA development - page 60

 

Example in SPL:

SPL - The SPL Programming Language

load "fann";

var ann = fann_create_standard_array(3, [2, 10, 1]);

for (var i = 0; i < 1000; i++)
{
    fann_train(ann, [0, 0], [0]);
    fann_train(ann, [0, 1], [1]);
    fann_train(ann, [1, 0], [1]);
    fann_train(ann, [1, 1], [0]);
}

for (var a = 0; a < 2; a++)
    for (var b = 0; b < 2; b++)
    {
        var result = pop fann_run(ann, [a, b]);
        debug "$a XOR $b ${round(result)} [${fmt("%5.03f", result)}]";
    }

fann_destroy(ann);

=========================

/* Trains the network with the backpropagation algorithm.
 */
FANN_EXTERNAL void FANN_API fann_train(struct fann *ann, fann_type *input,
                                       fann_type *desired_output)
{
    fann_run(ann, input);
    fann_compute_MSE(ann, desired_output);
    fann_backpropagate_MSE(ann);
    fann_update_weights(ann);
}

 

FANN library 2.0.0

https://www.mql5.com/en/forum/178276/page13

http://leenissen.dk/fann/

Type: fann_type

fann_type is the type used for the weights, inputs and outputs of the neural network.

fann_type is defined as:

float - if you include fann.h or floatfann.h

double - if you include doublefann.h

int - if you include fixedfann.h (please be aware that fixed point usage is

only to be used during execution, and not during training).

==================================

struct fann_train_data
{
    enum fann_errno_enum errno_f;
    FILE *error_log;
    char *errstr;
    unsigned int num_data;
    unsigned int num_input;
    unsigned int num_output;
    fann_type **input;
    fann_type **output;
};

 

Mt4 fann dll 2.0.0

Files:
mt4fann.rar  23 kb
mt4fann2.mq4  3 kb
fann200_1.gif  13 kb
 

FANN is not so good

Dear Barnix,

You are doing great; I personally respect your work and your attitude. Allow me to concisely mention a few points from my own opinion and experience:

1- FANN is not that good in a noisy input space. I have tested it against SVM and back-propagation, and it was the worst.

2- Back-propagation with momentum and a learning rate is better than SVM (in both regression and classification). Since SVM is very sensitive to input scaling (scaling to 0 -> 1 is better than -1 -> 1, which is better than no scaling at all), it may not be right for this kind of problem, for many reasons (we can discuss them later).

3- In your input space I see code which I think will get you into trouble. Look at this piece of code in the init_train_bars() function:

for(int i=0; i<NR_OF_SAMPLE_LINES; i++)
{
    ma5_M5 = iMA(NULL, PERIOD_M5, 5, 0, MODE_SMA, PRICE_CLOSE, i);
    ma5_H4 = iMA(NULL, PERIOD_H4, 5, 0, MODE_SMA, PRICE_CLOSE, i);
    ....
}

Suppose NR_OF_SAMPLE_LINES = 2000: you are feeding the network with MA data on the M5 TF for 2000 bars, which equals 2000*5/60 ≈ 167 hours (about 7 days) of M5 data; and you also feed it 2000 bars of MA on the H4 TF, which equals 8000 hours (about 333 days).

I think the network will not do well; feeding it data that is 333 days old will create unrelated input vectors and cause you problems.

I have other ideas too (after a long time of research), but they are not suitable for an SVM model; they need the BP-with-momentum approach.

Regards,

 

Yes, you are right. But when you take a decision in real life about opening a position, you analyze, for example, 2000 bars on M5, 2000 bars on H1 and 2000 bars on H4 in the MT4 terminal.

You make something like an SVM: you do a classification on historical data.


FANN SCALE DATA

====================

/*
 * INTERNAL FUNCTION Scales data to a specific range
 */
void fann_scale_data(fann_type **data, unsigned int num_data, unsigned int num_elem,
                     fann_type new_min, fann_type new_max)
{
    unsigned int dat, elem;
    fann_type old_min, old_max, temp, old_span, new_span, factor;

    old_min = old_max = data[0][0];

    /*
     * first calculate min and max
     */
    for(dat = 0; dat < num_data; dat++)
    {
        for(elem = 0; elem < num_elem; elem++)
        {
            temp = data[dat][elem];
            if(temp < old_min)
                old_min = temp;
            else if(temp > old_max)
                old_max = temp;
        }
    }

    old_span = old_max - old_min;
    new_span = new_max - new_min;
    factor = new_span / old_span;

    for(dat = 0; dat < num_data; dat++)
    {
        for(elem = 0; elem < num_elem; elem++)
        {
            temp = (data[dat][elem] - old_min) * factor + new_min;
            if(temp < new_min)
            {
                data[dat][elem] = new_min;
                /*
                 * printf("error %f < %f\n", temp, new_min);
                 */
            }
            else if(temp > new_max)
            {
                data[dat][elem] = new_max;
                /*
                 * printf("error %f > %f\n", temp, new_max);
                 */
            }
            else
            {
                data[dat][elem] = temp;
            }
        }
    }
}

/*
 * Scales the inputs in the training data to the specified range
 */
FANN_EXTERNAL void FANN_API fann_scale_input_train_data(struct fann_train_data *train_data,
                                                        fann_type new_min, fann_type new_max)
{
    fann_scale_data(train_data->input, train_data->num_data, train_data->num_input,
                    new_min, new_max);
}

/*
 * Scales the outputs in the training data to the specified range
 */
FANN_EXTERNAL void FANN_API fann_scale_output_train_data(struct fann_train_data *train_data,
                                                         fann_type new_min, fann_type new_max)
{
    fann_scale_data(train_data->output, train_data->num_data, train_data->num_output,
                    new_min, new_max);
}

/*
 * Scales the inputs and outputs in the training data to the specified range
 */
FANN_EXTERNAL void FANN_API fann_scale_train_data(struct fann_train_data *train_data,
                                                  fann_type new_min, fann_type new_max)
{
    fann_scale_data(train_data->input, train_data->num_data, train_data->num_input,
                    new_min, new_max);
    fann_scale_data(train_data->output, train_data->num_data, train_data->num_output,
                    new_min, new_max);
}

 

PNN XOR Example

Files:
pnn_train.mq4  6 kb
pnn_test.mq4  6 kb
pnn_1_1.gif  22 kb
 

fapturbo 36c statement eurgbp m15

 

fapturbo 36c statement eurchf m15

 

It also opens SHORT positions if you change "==1" to "!=0" in the following line:

else if (class == 1) {

Hope this doesn't have any negative effect on it.

Has anyone already made profit in forward tests with this one?
