# sigmoid derivative formula

Hello MQ, what is your sigmoid derivative formula in the Matrix and vector libraries, if I may ask?

I'm getting the same sigmoid values for the activation, but different values for the derivative.

If I'm doing something wrong, let me know of course (which is probably what's going on).

Thank you

```
#property version   "1.00"
//+------------------------------------------------------------------+
//| Expert initialization function                                   |
//+------------------------------------------------------------------+
int OnInit()
  {
//---
   double values[];
   vector vValues;
   int    total=20;
   double start=-1.0;
   double step=0.1;
   ArrayResize(values,total,0);
   vValues.Init(total);
   for(int i=0;i<total;i++)
     {
      values[i]=sigmoid(start);   // manual sigmoid of x
      vValues[i]=start;           // raw x for the vector API
      start+=step;
     }
   vValues.Activation(vValues,AF_SIGMOID);
//--- print sigmoid : manual (DV) vs vector (VV)
   for(int i=0;i<total;i++)
     {
      Print("sigmoidDV("+values[i]+")sigmoidVV("+vValues[i]+")");
     }
//--- derivatives
   vValues.Derivative(vValues,AF_SIGMOID);
   for(int i=0;i<total;i++)
     {
      values[i]=sigmoid_derivative(values[i]);   // derivative from the sigmoid output
      Print("dDV("+values[i]+")dVV("+vValues[i]+")");
     }
//---
   return(INIT_SUCCEEDED);
  }
//+------------------------------------------------------------------+
//| Sigmoid of x                                                     |
//+------------------------------------------------------------------+
double sigmoid(double of)
  {
   return(1.0/(1.0+MathExp(-1.0*of)));
  }
//+------------------------------------------------------------------+
//| Derivative expressed via the sigmoid output y : y*(1-y)          |
//+------------------------------------------------------------------+
double sigmoid_derivative(double output)
  {
   return(output*(1.0-output));
  }
```

Lorentzos Roussos:

> Hello MQ, what is your sigmoid derivative formula in the Matrix and vector libraries, if I may ask?
>
> I'm getting the same sigmoid values for the activation, but different values for the derivative.
>
> If I'm doing something wrong, let me know of course (which is probably what's going on).
>
> Thank you

After testing, it seems the function expects the vector to hold the raw values that would *produce* a sigmoid (the pre-activation x), not the sigmoid output.

This is not in the documentation, so caution.

In other words, the derivative function computes from the pre-activation x, not from the activation output you would normally feed into y*(1-y).

That will require networks to also retain the pre-activation values in order to request the derivative. Unless it is an error.

Here is the temporary "fix": send x instead of the activation(x) value.

Example:

Sigmoid(0) = 0.5

Normally you retain an output vector (0.5), and you'd feed that 0.5 into the derivative function before multiplying it with the error vector.

These derivative functions instead expect you to send 0 (the value before the activation).

It's counter-intuitive (I assume for others too), so either change it or document it.

It will force workarounds that mess up all the speed gains ;)

Thank you

Workaround:

```
#property version   "1.00"
//+------------------------------------------------------------------+
//| Expert initialization function                                   |
//+------------------------------------------------------------------+
int OnInit()
  {
//---
   double values[],original_values[];
   vector vValues;
   int    total=20;
   double start=-1.0;
   double step=0.1;
   ArrayResize(values,total,0);
   ArrayResize(original_values,total,0);
   vValues.Init(total);
   for(int i=0;i<total;i++)
     {
      values[i]=sigmoid(start);   // manual sigmoid of x
      original_values[i]=start;   // keep the pre-activation x for later
      vValues[i]=start;
      start+=step;
     }
   vValues.Activation(vValues,AF_SIGMOID);
//--- print sigmoid : manual (DV) vs vector (VV)
   for(int i=0;i<total;i++)
     {
      Print("sigmoidDV("+values[i]+")sigmoidVV("+vValues[i]+")");
     }
//--- derivatives : reload the raw x values before calling Derivative()
   vValues.Assign(original_values);
   vValues.Derivative(vValues,AF_SIGMOID);
   for(int i=0;i<total;i++)
     {
      values[i]=sigmoid_derivative(values[i]);   // derivative from the sigmoid output
      Print("dDV("+values[i]+")dVV("+vValues[i]+")");
     }
   Print("Sigmoid(0)"+sigmoid(0));
//---
   return(INIT_SUCCEEDED);
  }
double sigmoid(double of)
  {
   return(1.0/(1.0+MathExp(-1.0*of)));
  }
double sigmoid_derivative(double output)
  {
   return(output*(1.0-output));
  }
```