Hybrid neural networks. - page 22

 
gumgum >> :
Does anyone have a detailed description of the c-means algorithm?

WiKi, k-means, c-means, and google ...

What do you want to use it for?

 
rip >> :

WiKi, k-means, c-means, and google ...

What do you want to use it for?

>> To do the number-crunching for the hybrid.


By the way, have you tried the coefficient gradients? I'm getting a division by zero!

 

Maybe someone has already written one, maybe not...


After trying several algorithms, I ran into a problem: it takes >in[N/2]*in (where in = number of training examples) to reach a sufficient error level. I initialized the weights with (MathRand()-MathRand())/32767.


Initializing the weights via DoubleToStr and StrToDouble gives positive results in reaching the goal!


double ranD(int rsign)
{
   // Build a number in [0,1) from 25 random decimal digits
   string ran = "0.";
   for(int z = 1; z <= 25; z++)
   {
      // divide by 32768 (not 32767) so the digit is always 0..9
      ran = ran + DoubleToStr(MathFloor((MathRand() / 32768.0) * 10), 0);
   }
   double randou = StrToDouble(ran);
   if(rsign == 1)   // optionally randomize the sign
   {
      double dip = (MathRand() - MathRand()) / 32767.0;
      if(dip < 0)  { randou = (-1) * randou; }
      if(dip == 0) { randou = 0; }
   }
   return(randou);
}
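For readers outside MQL4, the same idea can be sketched in plain C (a sketch under assumptions: rand() stands in for MathRand(), strtod for StrToDouble, and ran_d is my name for the port, not the poster's):

```c
#include <stdlib.h>
#include <string.h>

/* Build a value in [0,1) from 25 random decimal digits, then parse the
   digit string back into a double; optionally randomize the sign, as in
   the MQL4 version above. */
double ran_d(int rsign)
{
    char buf[32] = "0.";
    for (int z = 0; z < 25; z++) {
        /* rand()/(RAND_MAX+1) is in [0,1), so the digit is always 0..9 */
        int digit = (int)(((double)rand() / ((double)RAND_MAX + 1.0)) * 10.0);
        char d[2] = { (char)('0' + digit), '\0' };
        strcat(buf, d);
    }
    double randou = strtod(buf, NULL);
    if (rsign == 1) {                    /* optionally randomize the sign */
        int dip = rand() - rand();
        if (dip < 0) randou = -randou;
        if (dip == 0) randou = 0.0;
    }
    return randou;
}
```

Note that a double keeps only about 15 significant digits, so the last ten digits of the string are lost; the practical gain over (MathRand()-MathRand())/32767 is that the values are not confined to a coarse grid with step 1/32767.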


 

The topic is already overgrown with cobwebs...


I noticed this thing... Suppose we have an x-n1-n2-y neural network. Let's train it with backpropagation in batch mode until E<e, but a little differently.


From the training set S, make a new set MG=S and partition it into K (finite) subsets M such that adjacent subsets overlap: M(n)&M(n+1)!=0.

Run through all the learning subsets M1,M2,M3,...,M(n),M(n+1) of the set MG, choose M(Emin) (the subset with the smallest error) and correct the error on it; if M(Emin)<e, stop; if not, then first we need M(Emin)/M(Emin)--1.


It trains much better this way.
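My reading of this scheme, as a minimal sketch on a toy one-parameter model y = w*x (the subset layout, learning rate, and error measure are my assumptions, not the poster's exact procedure):

```c
/* Sketch of the "train on the best subset" scheme: split S into K
   overlapping subsets M(n); each iteration, evaluate the error on every
   subset, pick M(Emin) (smallest error), take one batch gradient step
   on it, and stop once that error falls below e. */

#define N 8      /* training examples in S */
#define K 4      /* number of subsets */
#define LEN 3    /* subset length; stride 2 => M(n) & M(n+1) overlap */

static double sq_err(const double *x, const double *y,
                     const int *idx, int len, double w)
{
    double e = 0.0;
    for (int i = 0; i < len; i++) {
        double d = w * x[idx[i]] - y[idx[i]];
        e += d * d;
    }
    return e / len;   /* mean squared error on the subset */
}

double train(const double *x, const double *y, double e, int max_iter)
{
    int M[K][LEN];
    for (int n = 0; n < K; n++)          /* overlapping partition of S */
        for (int j = 0; j < LEN; j++)
            M[n][j] = (2 * n + j) % N;

    double w = 0.0, lr = 0.01;
    for (int it = 0; it < max_iter; it++) {
        int best = 0;                    /* find M(Emin) */
        double emin = sq_err(x, y, M[0], LEN, w);
        for (int n = 1; n < K; n++) {
            double en = sq_err(x, y, M[n], LEN, w);
            if (en < emin) { emin = en; best = n; }
        }
        if (emin < e) break;             /* M(Emin) < e => stop */
        double g = 0.0;                  /* batch gradient on M(Emin) */
        for (int j = 0; j < LEN; j++) {
            int i = M[best][j];
            g += 2.0 * (w * x[i] - y[i]) * x[i];
        }
        w -= lr * g / LEN;
    }
    return w;
}
```

One caveat of the selection rule as I read it: correcting only the subset that already has the lowest error can neglect the hard examples, which is presumably what the question about the test-set error below is probing.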

 
gumgum >> :

The topic is already overgrown with cobwebs...


I noticed such a thing... Suppose we have a neural network x-n1-n2-y. Let's train it with backpropagation in batch mode until E<e, but a little differently.


From the training set S, make a new set MG=S and partition it into K (finite) subsets M such that adjacent subsets overlap: M(n)&M(n+1)!=0.

Run through all the learning subsets M1,M2,M3,...,M(n),M(n+1) of the set MG, choose M(Emin) (the subset with the smallest error) and correct the error on it; if M(Emin)<e, stop; if not, then we still need M(Emin)/M(Emin)--1.


Well, this is a much better way of training.


What does the test subset show? How does the error behave on it?

The method described is sometimes found in the literature as a modified batch mode.

 

Comrades, if anyone has implemented stochastic learning algorithms, please share your impressions, experience, etc. (I don't need source code.)

P.S. Thanks in advance.