Hybrid neural networks.

 
joo >> :

Actually, I invented it back in the 3rd or 4th grade, whenever it is they teach you to extract roots. I was doing square roots, cube roots... but on squared notebook paper.

Tested it. The results are really impressive.


Tell us about it (brag).
 
IlyaA >> :

Didn't the problem I described grab you? It comes up all over the place; there's no getting away from it. Most radio-electronic equipment is built on this principle.

We don't extrapolate or average, we isolate. Noise integration works here.

The point of learning is not to isolate some of the recurring features by sifting them out and then memorizing them. And it's not even that we can't sift the features out.

I outlined the principle of learning in one of the threads:

Learning is the process of acquiring the ability to generalise, to classify, to abstract, to draw conclusions.

By what means - that is another question.

IlyaA wrote >>:

How do people learn? :) They read one topic, then another. Each topic is studied separately, and then they generalize. Your network, on the other hand, will just memorize the picture by rote and won't generalize anything. It will become over-specialized.

Read above.

IlyaA wrote >>:

Don't read too many books? What do you suggest instead, then? Watching TV and banging my head against the wall?

The meaning is deeper than it seems; or rather, it seems there is no meaning in those words at all. The point is to think, to reason, to draw conclusions, not to memorize.

IlyaA wrote >>:

Tell us about it (brag).

I don't need to. There was a question; there was an answer.

 
IlyaA >> :


Oh yes, in the early stages the network is fully connected (well, or like a convolutional network, only with many layers). And all this happiness gets multiplied by 10 and starts crossbreeding. Each offspring has to be processed, i.e. we get a tenfold workload. And if the idea is to teach it a profitable trick, then for every generation I have to evaluate the whole time interval and run it through every offspring. This operation turned out to be very resource-intensive, so I'm coming back to my original question.

Why not use RProp? In the case of genetics, it gives a significant speed-up of the calculations.
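
A rough sketch of the kind of evaluation loop described above (Python; the data, network and all sizes are my own illustrative assumptions, not anyone's actual code): every generation, every offspring has to be driven through the entire historical interval, so the work grows as generations x population x bars.

```python
# Rough cost sketch of a GA training loop like the one described above.
# Data, network and all sizes are illustrative assumptions only.

import numpy as np

BARS        = 10_000   # length of the historical interval
POPULATION  = 10       # "all this happiness gets multiplied by 10"
GENERATIONS = 200
INPUTS      = 20

rng = np.random.default_rng(0)
history = rng.normal(size=BARS)              # stand-in for price data

def fitness(weights, data):
    # One full pass over the entire interval for a single individual.
    activations = np.tanh(np.outer(data, weights).sum(axis=1))
    return activations.sum()

def crossover(a, b):
    # uniform crossover: each gene taken from either parent at random
    mask = rng.random(a.shape) < 0.5
    return np.where(mask, a, b)

population = [rng.normal(size=INPUTS) for _ in range(POPULATION)]

for generation in range(GENERATIONS):
    scores = np.array([fitness(w, history) for w in population])  # POPULATION full passes
    ranked = [population[i] for i in np.argsort(scores)[::-1]]    # best first
    parents = ranked[:4]
    population = [crossover(parents[int(rng.integers(4))], parents[int(rng.integers(4))])
                  + 0.05 * rng.normal(size=INPUTS)                # mutation
                  for _ in range(POPULATION)]

# Total work: GENERATIONS * POPULATION full passes over BARS bars,
# i.e. 200 * 10 * 10_000 = 20 million bar evaluations.
```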

 
joo >> :

The point of learning is not to isolate some of the recurring features by sifting them out and then memorizing them. And it's not even that we can't sift the features out.

I outlined the principle of learning in one of the threads:

Learning is the process of acquiring the ability to generalise, to classify, to abstract, to draw conclusions.

By what means - that is another question.


It seems to me that we have begun to philosophise, so I propose we end the discussion of this question with each of us "sticking to our guns".
 
rip >> :

Why not use RProp? In the case of genetics, it gives a significant speedup of the calculations.


I agree it is faster, and so is gradient descent; the difference is not that large. The point of using genetics is that the probability of finding the GLOBAL extremum approaches 1. No gradient method will show you that (correct me if I'm wrong). In addition, the hypersurface being optimized is riddled with an endless number of local extrema of significant amplitude. And adding more and more neurons only adds fuel to the fire: the hypersurface becomes even more intricate. Under such conditions gradient methods do converge, but, as I wrote above, the probability of finding the global extremum is 50-80%.
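
A toy illustration of the local-vs-global point (my own sketch, not the poster's code; the objective and all constants are made up): on a strongly rippled one-dimensional surface, gradient descent settles into whichever local minimum is nearest to its random starting point, while even a crude genetic search, by keeping a population spread over the whole range, usually ends up in or near the global basin.

```python
# Toy comparison of gradient descent vs. a crude genetic search on a
# multimodal 1-D surface. Purely illustrative; all numbers are assumptions.

import numpy as np

rng = np.random.default_rng(1)

def f(x):
    """A 'rippled' objective: many local minima, global minimum near x = -0.52."""
    return 0.05 * x**2 + np.sin(3.0 * x)

def grad_f(x):
    return 0.1 * x + 3.0 * np.cos(3.0 * x)

# --- gradient descent from a random start: converges to the nearest basin ---
x = rng.uniform(-10, 10)
for _ in range(2000):
    x -= 0.01 * grad_f(x)
print("gradient descent:", round(x, 3), "f =", round(f(x), 3))

# --- crude GA: selection + mutation over the whole range --------------------
pop = rng.uniform(-10, 10, size=30)
for _ in range(200):
    fit = f(pop)
    parents = pop[np.argsort(fit)[:10]]                 # keep the 10 best
    children = np.repeat(parents, 3) + rng.normal(0, 0.5, size=30)
    pop = np.clip(children, -10, 10)
best = pop[np.argmin(f(pop))]
print("genetic search: ", round(best, 3), "f =", round(f(best), 3))
```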
 
IlyaA wrote >>:

I agree it is faster, and so is gradient descent; the difference is not that large. The point of using genetics is that the probability of finding the GLOBAL extremum approaches 1. No gradient method will show you that (correct me if I'm wrong). In addition, the hypersurface being optimized is riddled with an endless number of local extrema of significant amplitude. And adding more and more neurons only adds fuel to the fire: the hypersurface becomes even more intricate. Under such conditions gradient methods do converge, but, as I wrote above, the probability of finding the global extremum is 50-80%.

Do you have concrete results from trading with this system? Is it worth the effort?

 
IlyaA >> :


I agree it is faster, and so is gradient descent; the difference is not that large. The point of using genetics is that the probability of finding the GLOBAL extremum approaches 1. No gradient method will show you that (correct me if I'm wrong). In addition, the hypersurface being optimized is riddled with an endless number of local extrema of significant amplitude. And adding more and more neurons only adds fuel to the fire: the hypersurface becomes even more intricate. Under such conditions gradient methods do converge, but, as I wrote above, the probability of finding the global extremum is 50-80%.

I agree that the gradient does not provide 100% convergence of the learning algorithm.

I only use GAs to obtain a new network topology. On average, RProp reaches a local minimum in 100-200 epochs.

After that, the best-performing individuals are selected and a new population is formed from them. Mutation. RProp.
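
A minimal sketch of this hybrid loop as I read it (Python; the toy data, network, topology handling and all hyperparameters are my own assumptions): RProp drives each individual's weights into the nearest local minimum, the best-scoring individuals are kept as parents, and mutation produces the next population. Real topology search would also vary the structure between generations; here only the initial population differs in hidden-layer size.

```python
# Sketch of a GA + RProp hybrid in the spirit of the exchange above.
# Data, network shape, and all hyperparameters are illustrative assumptions.

import numpy as np

rng = np.random.default_rng(2)

# toy regression data standing in for a real training set
X = rng.uniform(-1, 1, size=(200, 4))
y = np.sin(X.sum(axis=1))

def init_net(hidden):
    # one hidden layer; only the hidden size varies between individuals here
    return {"W1": rng.normal(0, 0.5, (4, hidden)),
            "W2": rng.normal(0, 0.5, (hidden, 1))}

def gradients(net, X, y):
    h = np.tanh(X @ net["W1"])
    out = (h @ net["W2"]).ravel()
    err = out - y                                   # dMSE/dout up to a constant
    gW2 = h.T @ err[:, None]
    gW1 = X.T @ ((err[:, None] * net["W2"].T) * (1.0 - h**2))
    return {"W1": gW1, "W2": gW2}, float(np.mean(err**2))

def rprop_train(net, X, y, epochs=150):
    """Plain Rprop-: step sizes adapt on gradient sign changes only."""
    step = {k: np.full_like(v, 0.01) for k, v in net.items()}
    prev = {k: np.zeros_like(v) for k, v in net.items()}
    mse = None
    for _ in range(epochs):
        grad, mse = gradients(net, X, y)
        for k in net:
            sign = np.sign(grad[k] * prev[k])
            step[k] = np.clip(np.where(sign > 0, step[k] * 1.2,
                              np.where(sign < 0, step[k] * 0.5, step[k])),
                              1e-6, 1.0)
            net[k] -= np.sign(grad[k]) * step[k]
            prev[k] = grad[k]
    return mse

def mutate(net, scale=0.1):
    # offspring: small random perturbation of every weight of the parent
    return {k: v + rng.normal(0, scale, v.shape) for k, v in net.items()}

# outer genetic loop: RProp each individual to a local minimum, keep the best,
# form the next population by mutating the best parents
population = [init_net(hidden=int(rng.integers(3, 10))) for _ in range(10)]
for generation in range(5):
    scored = sorted(population, key=lambda net: rprop_train(net, X, y))
    best = scored[:3]
    population = best + [mutate(best[int(rng.integers(3))]) for _ in range(7)]
    print("generation", generation, "best MSE:", round(gradients(best[0], X, y)[1], 4))
```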

 
rip >> :

I agree, the gradient does not provide 100% convergence of the learning algorithm.

I only use GAs to obtain a new network topology. On average, RProp reaches a local minimum in 100-200 epochs.

After that, the best-performing individuals are selected and a new population is formed from them. Mutation. RProp.


So, a combined approach. Gentlemen, let me congratulate everyone: we have just justified the name of this thread. And here's an idea that came to mind. In genetics, mutation changes 20-40% of the weights by small increments. Isn't there a high probability that the offspring will simply return to its parents' habitat?
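
To make the question concrete, here is a tiny sketch of the mutation being described (Python; the 20-40% fraction comes from the post, the step size and weight-vector length are my assumptions). Whether the offspring "returns to the parents' habitat", i.e. falls back into the same basin of attraction, depends on how large those increments are relative to the width of the parent's local minimum.

```python
# Mutation as described: perturb 20-40% of the parent's weights by small steps.
# Step size and weight-vector length are illustrative assumptions.

import numpy as np

rng = np.random.default_rng(3)

def mutate(parent, frac_low=0.2, frac_high=0.4, step=0.05):
    child = parent.copy()
    n = len(parent)
    k = int(rng.integers(int(frac_low * n), int(frac_high * n) + 1))  # 20-40% of weights
    idx = rng.choice(n, size=k, replace=False)
    child[idx] += rng.normal(0.0, step, size=k)       # small increments
    return child

parent = rng.normal(size=100)
child = mutate(parent)
print("weights changed:", int(np.sum(child != parent)), "of", len(parent))
print("distance from parent:", round(float(np.linalg.norm(child - parent)), 3))
# With small increments the child stays close to the parent, so it is likely
# to land back in the same basin unless the step is comparable to basin width.
```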
 
FION >> :

Do you have concrete results from trading with this system? Is it worth the effort?


There are no concrete results, only ideas and predictions. None of the perceptrons I've built have survived to the indicator stage; I rejected them. :( The best idea at the moment overcomes the algorithm's resource-intensiveness. But Faith lives on (and Hope and Love too :).
 

Question.

Who has implemented Takagi-Sugeno-Kang fuzzy networks?
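
For reference, a minimal sketch of what a first-order Takagi-Sugeno-Kang (TSK) inference step computes (Python; the rule count, membership parameters and consequents are my own illustrative values): each rule combines Gaussian memberships of the inputs into a firing strength, each rule has a linear consequent, and the output is the firing-strength-weighted average of those consequents. A TSK fuzzy network (ANFIS-style) makes these premise and consequent parameters trainable.

```python
# Minimal first-order Takagi-Sugeno-Kang (TSK) fuzzy inference sketch.
# Rule count, membership parameters and consequents are illustrative only.

import numpy as np

def gauss(x, c, s):
    """Gaussian membership of x in a fuzzy set with center c and width s."""
    return np.exp(-0.5 * ((x - c) / s) ** 2)

# Two inputs, two rules. Each rule: IF x1 is A_i AND x2 is B_i
# THEN y_i = p_i*x1 + q_i*x2 + r_i (linear consequent).
centers = np.array([[-1.0, -1.0],     # rule 1 premise centers (x1, x2)
                    [ 1.0,  1.0]])    # rule 2 premise centers
widths  = np.array([[ 1.0,  1.0],
                    [ 1.0,  1.0]])
conseq  = np.array([[ 0.5, -0.3, 0.1],   # rule 1: p, q, r
                    [-0.2,  0.8, 0.0]])  # rule 2: p, q, r

def tsk_output(x):
    # firing strength of each rule: product of its membership degrees
    w = np.prod(gauss(x, centers, widths), axis=1)
    # linear consequent of each rule
    y = conseq[:, :2] @ x + conseq[:, 2]
    # defuzzification: firing-strength-weighted average of the consequents
    return np.sum(w * y) / np.sum(w)

print(tsk_output(np.array([0.5, -0.5])))
```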
