Market etiquette or good manners in a minefield - page 50

 
Prival wrote >>

If I knew what kind of example I wanted, I wouldn't have asked. Something simple in Mathcad, preferably with an explanation of what epochs are and so on. I don't understand many of the terms, so the meaning of what you're doing often escapes me.

I once saw a textbook example where a network is trained on a sine wave. Something like that, if it's not too much trouble.

I'll post it now, with comments so it's clear where everything comes from.

Done. Check it out in your inbox.
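Since the attached file itself isn't preserved in the thread, here is a minimal sketch of the kind of textbook example being discussed: a tiny network fitted to a sine wave, where one "epoch" means one complete pass over the training set. This is an illustration in Python/NumPy, not the posted file; every name and parameter in it is made up:

import numpy as np

# Training sample: one period of a sine wave (inputs x, targets d)
x = np.linspace(0.0, 2.0 * np.pi, 100).reshape(-1, 1)
d = np.sin(x)

rng = np.random.default_rng(0)
W1 = rng.normal(0.0, 0.5, (1, 8))   # input -> hidden weights
b1 = np.zeros(8)
W2 = rng.normal(0.0, 0.5, (8, 1))   # hidden -> output weights
b2 = np.zeros(1)
lr = 0.05                           # learning rate

# One "epoch" = one complete pass over the whole training sample.
for epoch in range(3000):
    h = np.tanh(x @ W1 + b1)        # hidden-layer outputs
    y = h @ W2 + b2                 # network output (linear output neuron)
    e = y - d                       # error at every training point

    # Backpropagation: gradients of the mean squared error
    gW2 = h.T @ e / len(x)
    gb2 = e.mean(axis=0)
    eh = (e @ W2.T) * (1.0 - h**2)  # error pushed back to the hidden layer
    gW1 = x.T @ eh / len(x)
    gb1 = eh.mean(axis=0)

    # Update the weights once per epoch (batch learning)
    W1 -= lr * gW1; b1 -= lr * gb1
    W2 -= lr * gW2; b2 -= lr * gb2

    if epoch % 500 == 0:
        print(f"epoch {epoch}: mse = {np.mean(e**2):.5f}")

After enough epochs the printed error shrinks and y traces the sine; the same structure (forward pass, error, backward pass, weight update) is what a Mathcad sheet would express with its own operators.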


to Neutron

I'm in the middle of the two-layer network, and I've been doing some digging along the way...

I think I know why my single-layer net behaves the way it does, that is, why it doesn't work like yours.

Take a look at this:


Now I don't even know whether to consider it a mistake (for a single layer) :) What do you think?

 
The two-layer network is not working yet. There's some kind of exception I can't seem to catch...
 

paralocus wrote >>

Now I don't even know whether to consider it a mistake (for a single layer) :) What do you think?

Get rid of those damned indexes!

You could compensate for this error manually, but that's just a crutch, so it's better to do it properly from the start.

One more thing. Here's the expression for calculating the learning error correctly:

In other words, you first take the sum of the squared errors over the whole training sample within one epoch, then divide it by the variance (the squared scatter) of the training vector as a normalization. This is done to avoid being tied to the number of epochs or to a specific architecture, and it makes it easier to compare NS training results. It turns out that if the resulting value is < 1, the network is trained; if not, the best prediction is to throw it in the trash and go to sleep.
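The expression itself was posted as an image, but reading the description literally, one plausible form is E = sum(e^2) / (N * var(d)): with the 1/N this is the mean squared error over the variance of the targets, so E < 1 means the net beats a trivial constant (mean) forecast. A hedged sketch of that reading, with illustrative names:

import numpy as np

def epoch_learning_error(e, d):
    """One reading of the criterion above: the epoch's sum of squared
    prediction errors, normalized by the scatter of the training vector.
    e = errors on every pattern of the epoch, d = training targets.
    Value < 1: the net predicts better than a constant (mean) forecast;
    value >= 1: it has learned nothing useful."""
    e = np.asarray(e, dtype=float)
    d = np.asarray(d, dtype=float)
    return (e @ e) / (len(e) * d.var())

Whether the original expression carries the extra 1/N or normalizes by the sum of squared targets instead is not recoverable from the text; the threshold-at-1 interpretation is the same either way.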

 
Got it. I was just accumulating the absolute error over the whole epoch and then dividing it by the epoch length. I'm currently working on the two-layer network.
 

But I don't understand why the indices have to be removed. I think it's just that the squared correction isn't being summed correctly.

I mean, it has to be like this:


What did you mean by that?

 

to Neutron


Serega, explain the concept. Are you setting your NS to forecast some value (Close, (H+L)/2, bar colour, ....) expected at the next sample (i.e. a forecast one sample ahead)? Did I get that right, or is it something else?

 
paralocus wrote >>

But I don't understand why the indices have to be removed. I think it's just that the squared correction isn't being summed correctly.

Why do you need indexes? You accumulate the correction (not its square, the correction itself, with its sign); no indexes are needed. Then you normalize it by the root of the sum of squares (again, no indices) and you get the desired correction value for the given epoch.
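Read literally, that scheme for a single weight might look like the following sketch. This is one interpretation, not the posted code; the function name and the exact normalization are assumptions:

import numpy as np

def epoch_correction(deltas):
    """deltas: the raw corrections computed for ONE weight on each
    pattern of the epoch. Accumulate them with their signs (no per-
    pattern index is kept), then normalize by the root of the sum of
    squares to get the single correction applied for the epoch."""
    deltas = np.asarray(deltas, dtype=float)
    total = deltas.sum()                  # signed accumulation
    norm = np.sqrt((deltas**2).sum())     # root of the sum of squares
    return total / norm if norm > 0.0 else 0.0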

grasn wrote >>

to Neutron

Serega, explain the concept. Are you setting your NS to forecast some value (Close, (H+L)/2, bar colour, ....) expected at the next sample (i.e. a forecast one sample ahead)? Did I get that right, or is it something else?

Yes, I forecast only one step ahead and then retrain the net. I predict the direction of the expected movement, not its magnitude or duration.

 

But I accumulate the correction for each weight individually, i.e. it will be different for the different weights feeding into the neuron (I think that's how you explained it; let me check).

This is what it looks like:


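The picture paralocus posted isn't preserved, but per-weight accumulation combined with the epoch normalization described above could be sketched like this for one tanh neuron. Everything here (shapes, learning rate, reset policy) is illustrative, not the original code:

import numpy as np

n_in = 4
w = np.zeros(n_in)                  # the neuron's weights

def train_epoch(patterns, targets, lr=0.1):
    """One epoch: a separate signed accumulator per weight, plus the
    accumulated squares for the normalization. The accumulators are
    reset at the start of every epoch (see the question below)."""
    global w
    acc = np.zeros(n_in)            # per-weight signed corrections
    acc_sq = np.zeros(n_in)         # per-weight sum of squared corrections
    for x, dt in zip(patterns, targets):
        y = np.tanh(w @ x)                      # neuron output
        delta = (dt - y) * (1.0 - y**2) * x     # raw correction, one per weight
        acc += delta
        acc_sq += delta**2
    norm = np.sqrt(acc_sq)
    # Normalized update; weights whose norm is zero stay untouched.
    w += lr * np.divide(acc, norm, out=np.zeros_like(acc), where=norm > 0.0)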
 

That's right!

I confused indexing over epochs with indexing over synapses. Your implementation is a little different, so I take it all back. Sorry!

So what is your question, then? What exactly isn't working?

 
You reset the counters before each epoch, don't you?