A quick and free library for MT4, much to the delight of neuralnetworkers - page 9

 
This is a committee of 16 nets; you can look at the files that are created.
 
Henry_White >> :

If we have a grid with one output neuron, how do we get 16 outputs...!? Or is it a committee of 16 nets?

It is.
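
Roughly, each of the 16 nets is run on the same InputVector and their outputs are averaged into one committee decision. A minimal sketch of that idea (ann_run_single() and AnnsNumber are placeholder names for this illustration, not necessarily what the library itself uses):

#define AnnsNumber 16               // assumption: 16 nets in the committee

double ann_run_single(int n) {
   // placeholder body so the sketch compiles; a real version would call
   // the neural-net library for net number n with InputVector as its input
   return(0.0);
}

double committee_run() {
   double sum = 0.0;
   for(int n = 0; n < AnnsNumber; n++) {
      sum += ann_run_single(n);     // each net votes with its output
   }
   return(sum / AnnsNumber);        // committee decision = average of the votes
}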

 

Hello all!

I've been working on this piece of code for two nights now.

void ann_prepare_input() {
   int i;
   double res = 0;
   for(i = 0; i < AnnInputs; i++) {
      res = (iRSI(Symbol(), 0, 30, PRICE_OPEN, i) - 50.0) / 50.0;   // scale RSI (0..100) into (-1..1)
      if(MathAbs(res) > 1) {
         if(res > 0) {
            InputVector[i] = 1.0;
         } else {
            InputVector[i] = -1.0;
         }
      } else {
         InputVector[i] = res;
      }
   }
}

I don't understand what the point is of

if(MathAbs(res) > 1) {
   if(res > 0) {
      InputVector[i] = 1.0;
   } else {
      InputVector[i] = -1.0;
   }
} else {
   InputVector[i] = res;
}

if the absolute value of res cannot be greater than one anyway.

Please explain this point, if it's not a secret, of course.

 
alex_r >> :

Hello all!

I've been poking around in this piece of code for two nights now.

I don't understand what the point is of

if the absolute value of res cannot be greater than one anyway.

Please explain this point if it's not a secret.

Values normalized to the (-1;1) range should be fed to the input of the NS. Otherwise NS training may lead to undefined results.
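
For any indicator whose natural bounds are known, the usual recipe is a linear rescaling into (-1;1) plus a clamp for stray values. A minimal sketch (normalize_to_unit() is just an illustrative helper; for RSI the bounds would be 0 and 100, which reduces to the (value - 50) / 50 formula above):

// Sketch: linearly rescale a raw value with known bounds into (-1;1),
// clamping anything that falls outside the band.
double normalize_to_unit(double raw, double raw_min, double raw_max) {
   double mid  = (raw_max + raw_min) / 2.0;   // centre of the raw range
   double half = (raw_max - raw_min) / 2.0;   // half-width of the raw range
   double res  = (raw - mid) / half;          // nominally within [-1;1]
   if(res >  1.0) res =  1.0;                 // clamp rare outliers
   if(res < -1.0) res = -1.0;
   return(res);
}

Called as normalize_to_unit(iRSI(Symbol(), 0, 30, PRICE_OPEN, i), 0.0, 100.0) it reproduces exactly what ann_prepare_input() does.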

 

Well, given how res is calculated, the condition

(MathAbs(res) > 1)

will never be met, and the input will simply be

res = (iRSI(Symbol(), 0, 30, PRICE_OPEN, i) - 50.0) / 50.0

which in 99.99% of cases will naturally give anything but exactly 1 or -1.
Whereas if it is written like this:

for(i = 0; i < AnnInputs; i++) {
   res = (iRSI(Symbol(), 0, 30, PRICE_OPEN, i) - 50.0) / 50.0;
   if(res > 0) {
      InputVector[i] = 1.0;
   } else {
      InputVector[i] = -1.0;
   }
}

then the input is only ever 1 or -1.

or am I wrong?
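
If there is any doubt, it is easy to check empirically with a throwaway script that scans the chart history and counts how often the condition would actually fire (a sketch only, using the same RSI parameters as above):

// Throwaway script sketch: count how often (iRSI - 50) / 50 leaves [-1;1]
// on the available history of the current chart.
int start() {
   int hits = 0;
   int bars = Bars;                 // bars available on the current chart
   for(int i = 0; i < bars; i++) {
      double res = (iRSI(Symbol(), 0, 30, PRICE_OPEN, i) - 50.0) / 50.0;
      if(MathAbs(res) > 1.0) hits++;
   }
   Print("Bars checked: ", bars, "; values outside [-1;1]: ", hits);
   return(0);
}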

 
alex_r >> :

>> or am I wrong?

Wrong. Everything works correctly here. You should study the code more closely - it is very simple and obvious. And I don't see the point of your "//": commenting that block out kills the meaning completely.

iRSI basically gives values from 0 to 100, with rare outliers outside that range (which is exactly why the (MathAbs(res) > 1) check is there), so res will lie between -1 and 1 in 99.9% of cases, while InputVector is guaranteed to stay in that range 100% of the time.

If you are expecting a breakdown of every operator, I don't think anyone here is in a position to give programming lessons, and for this thread it would be off-topic. You should probably take that to another section of the forum, or drop me a line.

PS. Don't comment out someone else's code (better to add comments to it). It can mislead others and is just bad form imho. If you don't agree with the author - write your own variant.

 

Removed the commented-out lines.

Now let me explain: when res > 0 (0 here corresponds to level 50 of the RSI indicator) we assign 1, otherwise we assign -1.

What is not clear? A minimum of code and nothing else.

As the source code is written, the main condition of data normalization is NOT fulfilled.

The only thing left would be to also filter out the zero value, but in this case that is not so important.

 
Your variant will give either -1 or 1. And what is all this for? How are you going to train a network with THIS? Or rather, WHY? What is the usefulness of this binary coding of states? How do you build a pattern from this to train the network?
 
Henry_White >> :

Values normalized to the (-1;1) range must be fed to the NS input. Otherwise NS training may lead to undefined results.

Maybe you were confused by this post. Here I meant range, not binary states.

 
alex_r >> :

As the source code is written, the main condition of data normalization is NOT fulfilled.

You are mistaken: the net's inputs (layer 1) are sigmoids configured for the range -1..1. Therefore feeding the inputs any value within -1 to 1 is a necessary and sufficient condition for normalization.


What you are trying to construct replaces the sigmoid with a custom Signum(input) function, which discretizes the input data too crudely and creates situations in which the training set contains a large number of mutually contradictory examples.
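
To make that concrete: two very different sequences of RSI readings collapse to the identical binary pattern after thresholding at 50, so the net is asked to learn different targets for one and the same input. A toy illustration with invented numbers:

// Toy illustration (RSI numbers invented): two very different sequences of
// RSI readings produce the identical pattern after Signum thresholding, so
// the training set ends up pairing one input pattern with conflicting targets.
void show_signum_collision() {
   double rsi_a[3] = {51.0, 55.0, 49.0};   // barely above/below 50
   double rsi_b[3] = {85.0, 99.0,  5.0};   // strongly overbought/oversold
   for(int i = 0; i < 3; i++) {
      double bin_a = -1.0;
      double bin_b = -1.0;
      if(rsi_a[i] > 50.0) bin_a = 1.0;
      if(rsi_b[i] > 50.0) bin_b = 1.0;
      Print("bar ", i, ": A -> ", bin_a, ", B -> ", bin_b);   // identical on every bar
   }
}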
