Market etiquette or good manners in a minefield - page 28

 
YDzh wrote >>

Neural network, 13 inputs, no hidden layer. Genetic algorithm training

Awesome, YDzh!

My results are much more modest. You should put it on a demo account and see where the network starts losing.

paralocus wrote >>

Neutron, it looks like you were right about the 25 readiness counts... -:)

For some reason my net is not learning: after 100 epochs the weights are practically the same as those the network was initialised with.

On a related note, another silly question:

Is the learning vector the same in each epoch or not?

Anyway, it turns out that the ratio of accumulated correction to accumulated squared correction tends to zero very quickly, so by about the 10th iteration learning practically stops.
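A toy sketch (mine, not code from this thread) of why that ratio dies out: if the corrections fluctuate around zero and the accumulators are never reset, the accumulated squared correction keeps growing while the accumulated correction stays bounded.

```python
# Toy illustration (not the thread's code): corrections that fluctuate
# around zero are accumulated WITHOUT ever being reset, so the ratio
# |sum(delta)| / sum(delta**2) collapses and learning stalls.
corr_sum = 0.0
sq_corr_sum = 0.0
ratios = []
for epoch in range(100):
    delta = 1.0 if epoch % 2 == 0 else -1.0  # stand-in for a raw correction
    corr_sum += delta
    sq_corr_sum += delta ** 2
    ratios.append(abs(corr_sum) / sq_corr_sum)
```

Here `ratios[0]` is 1.0 while `ratios[-1]` is 0.0: the effective step has vanished. Zeroing both accumulators at the start of each epoch avoids the stall.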

There is probably an error in the code; you need to look for it. That is why I always run the net in Mathcad first: it is convenient to trace the whole dynamics of learning at any level of detail, and it is much easier to compile statistics. To compare learning results I gather independent statistics over 100 experiments (on learnability and on prediction) and compare only the averages.

As for the training vector, it is of course different in each epoch. But it differs in a special way: it is the same vector, shifted to the left by one step each time, with a new datum entering the freed slot, and so on.
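That shifting can be sketched like this (a minimal example of mine; I am assuming the newest datum enters at one end while the oldest drops off the other):

```python
from collections import deque

window = deque([0.1, 0.2, 0.3, 0.4], maxlen=4)  # toy training vector

def push_reading(window, x):
    """Shift the vector by one step; the new datum takes the freed slot."""
    window.append(x)  # with maxlen set, the oldest element falls off
    return list(window)

before = list(window)
after = push_reading(window, 0.5)
# after == [0.2, 0.3, 0.4, 0.5]: the same vector shifted by one step
```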

P.S. I'm now investigating backpropagation with a uniform step, and it turns out that from epoch to epoch it is better to re-randomise all the weights and retrain the net completely than to keep the knowledge it has acquired. Perhaps this is a peculiarity of the input data used. I want to stress, paralocus, how important it is to check everything yourself in practice: do the weights grow too slowly? Then put a constant coefficient = 10 in front of the computed sum and see for yourself whether they move!

 
Neutron >> :

There is probably an error in the code; you need to look for it. That is why I always run the net in Mathcad first: it is convenient to trace the whole dynamics of learning at any level of detail, and it is much easier to compile statistics. To compare learning results I gather independent statistics over 100 experiments (on learnability and on prediction) and compare only the averages.

As for the training vector, it is of course different in each epoch. But it differs in a special way: it is the same vector, shifted to the left by one step each time, with a new datum entering the freed slot, and so on.

I've been digging through the code since yesterday. I seem to have cleaned everything up, double-checked it, and rewrote part of it to simplify it. Now everything writes and reads exactly as it should.

But do I need to zero out the correction vector after each epoch? I think that is the reason. I understand about shifting the vector one step forward - I'm doing it.

 

Well, of course you do!

All accumulators are reset to zero at the start of each new training epoch.
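In code, the reset can look like this (my own minimal batch-style sketch, assuming the corrections are accumulated over the epoch and applied once):

```python
import numpy as np

def run_epoch(w, samples, lr=0.1):
    """One training epoch for a single tanh neuron (illustrative only)."""
    corr = np.zeros_like(w)  # accumulator is zeroed at the START of the epoch
    for x, target in samples:
        y = np.tanh(w @ x)
        corr += (target - y) * (1.0 - y ** 2) * x  # accumulate within the epoch
    return w + lr * corr / len(samples)  # apply the correction once per epoch

rng = np.random.default_rng(0)
w = rng.uniform(-0.5, 0.5, size=3)
samples = []
for _ in range(5):
    x = rng.normal(size=3)
    samples.append((x, 1.0 if x.sum() > 0 else -1.0))

w0 = w.copy()
for _ in range(100):
    w = run_epoch(w, samples)
```

If `corr` were carried over between epochs instead of being rebuilt from zero, the stale corrections would swamp the new ones, which matches the stalled learning described earlier in the thread.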

 

Is Mathcad hard to learn?

Even though I'm scared of it, I think I'll have to get to grips with it...

 
Neutron >> :

I want to stress, paralocus, how important it is to check everything yourself in practice: do the weights grow too slowly? Then put a constant coefficient = 10 in front of the computed sum and see for yourself whether they move!

The need for self-checking is self-evident to me, although I have not yet observed any growth of the weights in this implementation. As for where to put the 10, I haven't understood that yet.
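For what it's worth, here is one possible reading of the "coefficient = 10" advice (my own interpretation, not confirmed in the thread): multiply the neuron's computed sum by a constant K before the activation and see whether the weight corrections come alive. With tiny weights, the delta-rule correction scales roughly with K:

```python
import numpy as np

K = 10.0  # the constant coefficient suggested above

def correction(w, x, target, k=1.0):
    """Delta-rule weight correction for a tanh neuron with a scaled sum."""
    y = np.tanh(k * (w @ x))  # the computed sum, multiplied by the constant
    return (target - y) * k * (1.0 - y ** 2) * x

w = np.full(3, 0.01)  # tiny weights: learning looks "stuck"
x = np.array([1.0, -0.5, 0.25])
plain = np.abs(correction(w, x, 1.0)).sum()
scaled = np.abs(correction(w, x, 1.0, k=K)).sum()
# scaled is roughly K times plain here; if the weights still refuse to
# move with such a boost, the bug is elsewhere in the code.
```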

By the way, if you re-randomise the weights at the beginning of each epoch, that amounts to using just one epoch. It seems quite plausible, since the increment of the error vector becomes negligibly small after N epochs, and we have no spare PC resources anyway.

 
Neutron wrote >>

Cool, YDzh!

My results are much more modest. You should put it on a demo account and see where the network starts losing.

I have one trivial problem with it - I don't have a computer that's always on... I should try to use shorter timeframes, otherwise the error analysis will take more than half a year :)

 
paralocus wrote >>

Is Mathcad hard to learn?

...

It is the easiest programming language; some people don't even consider it a language. Most of the time you simply see a formula in a book and write it straight into Mathcad.

The only thing you have to remember is that Mathcad is a matrix language: even a plain number (a scalar) in Mathcad is a matrix.

 
Prival >> :

It is the easiest programming language; some people don't even consider it a language. Most of the time you simply see a formula in a book and write it straight into Mathcad.

The only thing you have to remember is that Mathcad is a matrix language: even a plain number (a scalar) in Mathcad is a matrix. I consider Mathcad the pinnacle of evolution in programming languages.

Thanks, Prival!

Good to see you! -:)

 
paralocus писал(а) >>

Is the learning vector the same in each epoch or not?

I answered your question incorrectly in the previous post - I meant a new reading (a new data count), not a new epoch. When training on each new reading we have just one training vector and a hundred training epochs, with the NN weights corrected in each epoch!

A new reading arrives - the training vector changes, and we again run a hundred training epochs on it, and so on.
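That regime can be sketched as follows (a toy single-neuron example of mine; only the 13 inputs come from the thread, everything else is made up):

```python
import numpy as np

EPOCHS_PER_READING = 100  # "a hundred training epochs" per new reading

def train_on_vector(w, x, target, epochs=EPOCHS_PER_READING, lr=0.1):
    """Correct the weights in every epoch on the SAME training vector."""
    for _ in range(epochs):
        y = np.tanh(w @ x)
        w = w + lr * (target - y) * (1.0 - y ** 2) * x  # delta rule
    return w

rng = np.random.default_rng(42)
w = rng.uniform(-0.1, 0.1, size=13)  # 13 inputs, as in YDzh's net

for reading in range(3):  # each new reading brings a new training vector
    x = rng.normal(size=13)
    target = 1.0 if x.sum() > 0 else -1.0  # hypothetical target
    w = train_on_vector(w, x, target)
```

One training vector, many epochs on it, then a fresh vector when the next reading arrives: the vector changes between readings, not between epochs.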

Sorry. I'm already getting confused myself.

paralocus wrote >>

Is Mathcad hard to learn?

Even though I'm scared of it, I think I'll have to get to grips with it...

No, it's easy. Get yourself Mathcad 2001i Pro.

 
paralocus wrote >>

Thanks Prival!

Good to see you! -:)

Yes, I follow this thread very carefully all the time. But I don't understand a lot of it because of the terminology: synapses, epochs, etc.

It takes time to understand; it would be better to do it with a teacher (that would be faster), but so far I can't manage that. I am preparing data to verify an idea - I plan to test it with a neural network (NN). Then the time will come to understand what to program in it and how. For now I only know (or think I know) what data it should be fed and what it should be trained on.

I'm working with Mathcad 14, and it has some handier features than 2001i.
