Market etiquette or good manners in a minefield - page 36

 
HideYourRichess wrote >>

Any straight line can be inscribed in such a circle, at any angle to the horizontal.

Well, well!

Not at just any angle, as you can see! At one quite definite angle...

 
Neutron >> :

Now it looks like it!

Where is the promised training cloud?

Put a scale grid on the graph and tell me what the tangent of the angle of inclination of your straight line is.

Those were the pictures obtained by plotting the first difference of the five-member sine along X and the grid's prediction with random weights (initial weight setting +/-1) along Y. The rest is still in the works - coming soon.
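For concreteness, here is a minimal Python sketch of that experiment (assumptions: the "five-member sine" is a sine sampled five points per period, and the untrained grid is a single tanh neuron with weights drawn from +/-1 - the actual worksheet may differ):

```python
import numpy as np

rng = np.random.default_rng(0)

s = np.sin(2 * np.pi * np.arange(200) / 5)   # 5-sample-period sine (assumed)
x = np.diff(s)                               # its first difference

n = 5                                        # input window length (assumed)
w = rng.uniform(-1.0, 1.0, n + 1)            # random initial weights, incl. bias

X, Y = [], []
for i in range(n, len(x)):
    inp = np.append(x[i - n:i], 1.0)         # window plus constant bias input
    X.append(x[i])                           # abscissa: the actual series value
    Y.append(np.tanh(w @ inp))               # ordinate: the untrained grid's output
print("corr(X, Y) =", np.corrcoef(X, Y)[0, 1])
```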

 
Neutron >> :

Well, well!

Not at just any angle, as you can see! At one quite definite angle...

>> Yeah, yeah, yeah.

 

There you go, it's done. Only the training sample is missing here, because the training sample is vector X itself. You can, of course, plot it on the graph, but it will be a straight line crossing the graph at a 45° angle. The neuron was trained on three kinds of samples (a small sketch of the encodings follows the list):

1. Simple first difference of 5sin - vector Y_simpl, green

2. Hyperbolic tangent of the first difference of 5sin - blue

3. Binary inputs (from 5sin) - purple
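A small sketch of these three encodings (assuming "5sin" is the same 5-sample-period sine; the names are illustrative only):

```python
import numpy as np

t = np.arange(100)
s = np.sin(2 * np.pi * t / 5)

d = np.diff(s)        # 1. simple first difference (vector Y_simpl, green)
h = np.tanh(d)        # 2. hyperbolic tangent of the first difference (blue)
b = np.sign(d)        # 3. binary inputs: only the sign of each step (purple)
```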


The tangents of the straight lines come out like this:



And yet something is wrong. Or I don't understand the task you gave me.

This neuron is retrained on each new data vector X, and that same vector serves as the abscissa of the graph. If I were to plot it on the ordinate as well...? What for?

In addition, I noticed that if I recalculate the worksheet with the same data, the results come out different. Is it supposed to be like this? That is, the results depend strongly on the initial weight settings.

Well, when I didn't train the neuron at every step, but just ran it over the data vector with randomly chosen initial weights, I (sometimes) got plots with a cloud in the middle.

Now it comes out as you can see. Here's another picture.


Just in case, I'm attaching the MathCad listing.

Files:
neyron.rar  114 kb
 
paralocus wrote >>

Well, there you have it. Only the training sample is missing, because the training sample is vector X itself. You can draw it on the graph, of course, but it will be a straight line crossing the graph at a 45° angle.

You have a training vector of length n, on which the NS trains to recognize only one sample - the (n+1)-th. We have statistics of N>100 such cases, which form a training set (more exactly, a set of examples of how the NS trained) of N samples; this is not one training vector, but a set of samples obtained from the available sine series by shifting it one sample to the right each time, up to N. Thus we have N values on which the NS was trained (the (n+1)-th elements of the training vectors), and we also have N values showing how it learned them. Put them on the abscissa and ordinate axes respectively, and you get the cloud that characterizes the learning process.

It is clear that this cloud will not have a 45-degree slope. That follows from the fact that the number of weights of the grid, w, is much smaller than the optimal length of the training vector P = w^2/d, and as a consequence the system of equations that the NS solves to find the optimal weights is overdetermined and, in general, has no exact solution. Therefore an approximation is sought that is optimal in the sense of minimizing the squared learning error over all equations. If the length of the training vector were equal to the number of weights, one could expect 100% learning by the network, but that is not optimal in the sense of the network's ability to generalize the accumulated knowledge later, and it leads to grandiose errors on the test sample.

A properly trained grid will show a slope angle of the training cloud close to the slope angle of the test cloud. Conversely, an over-trained network will show a 45-degree angle with negligible variance for the training cloud, and a zero angle with infinite variance for the test cloud. That is exactly the result you get if you solve the system of linear algebraic equations for a single perceptron exactly - I have already mentioned this above. This is why a cloud diagram is so convenient when debugging a network (something you can't do in MQL).
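For illustration, a minimal Python sketch of such a cloud diagram (assumptions: the NS is replaced by a linear least-squares predictor with w = 3 weights, and the series is a noisy first difference of the 5-sine; this is not the actual MathCad worksheet):

```python
import numpy as np

rng = np.random.default_rng(2)
series = np.diff(np.sin(2 * np.pi * np.arange(500) / 5))
series = series + 0.05 * rng.standard_normal(series.size)   # noisy 5-sine diff

p = 3      # number of weights w (model order)
n = 24     # training-vector length, chosen much larger than p (cf. P = w^2/d)
N = 150    # number of shifted training sets (the statistics)

train_pts, test_pts = [], []
for k in range(N):
    win = series[k:k + n + 1]                         # window plus one target
    X = np.array([win[i:i + p] for i in range(n - p)])
    y = win[p:n]                                      # in-sample targets
    w, *_ = np.linalg.lstsq(X, y, rcond=None)         # overdetermined: least squares
    train_pts.append((y[-1], X[-1] @ w))              # how it learned a sample
    test_pts.append((win[n], win[n - p:n] @ w))       # out-of-sample prediction

for name, pts in (("training", train_pts), ("test", test_pts)):
    a, b = np.array(pts).T                            # abscissa: target, ordinate: answer
    slope = np.polyfit(a, b, 1)[0]
    print(f"{name} cloud slope (tangent): {slope:.2f}")
```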

The fact that your slope changes from experiment to experiment, and even turns negative, clearly indicates that your girl lives a secret life of her own and is not being trained. She needs a lobotomy :-)

P.S. Save your files with the *.mcd extension for MathCad 2001i, otherwise I can't read them.

 
Here, saved as *.mcd for MathCad 12 and 11. I don't have any other options for mcd files.
Files:
mathcad.rar  40 kb
 

I'll have a look now.

By the way, here's how my girl works out the 5-sine series:

It's a two-layer NS with two real inputs (not counting the constant bias), k=2; it has 2 nonlinearly activated neurons in the hidden layer and one linear neuron (no activation function) at the output. The training sample is shown in red and the test sample (predictions) in blue. It is retrained on each sample. The statistics are 200 experiments. The tangents of the angles are given in the plot field.
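For reference, a minimal Python sketch of this architecture (shapes only; the weights and their training here are placeholders, not the actual worksheet):

```python
import numpy as np

rng = np.random.default_rng(3)

k = 2                                  # two real inputs, plus a constant bias
W1 = rng.uniform(-1, 1, (2, k + 1))    # hidden layer: 2 nonlinear (tanh) neurons
W2 = rng.uniform(-1, 1, 2 + 1)         # output layer: 1 linear neuron, with bias

def forward(x):
    """x: vector of k inputs -> scalar prediction."""
    h = np.tanh(W1 @ np.append(x, 1.0))   # nonlinear hidden activations
    return W2 @ np.append(h, 1.0)         # linear output, no activation function

print(forward(np.array([0.3, -0.7])))
```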

 

There is much in your code that I, in my simplicity, didn't understand. But of what I did manage to comprehend, the following does not match my ideas:

You should have a cumulative sum of the squared deviations S[k] over the whole training epoch P. What I see is just a renaming of variables. And there is no initial initialization of the weights with random numbers.
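A minimal sketch of what is meant here (all names illustrative): the squared deviations are accumulated over the whole epoch of P presentations, and the weights are first initialized with random numbers.

```python
import numpy as np

rng = np.random.default_rng(4)

P = 100
w = rng.uniform(-1.0, 1.0, 6)       # initial random initialization of the weights

S = 0.0                             # cumulative sum of squared deviations
for _ in range(P):
    x = rng.uniform(-1, 1, 6)       # stand-in training vector
    d = 0.5                         # stand-in target value
    y = np.tanh(w @ x)
    S += (d - y) ** 2               # accumulate; do not overwrite S[k]
print("epoch error S =", S)
```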

I also still don't understand: what are the subroutines method() and ranif() that you introduced, and where should I look for them?

I have simplified your code a bit (see file).

Files:
modif1_4.zip  16 kb
 
I've only just got to the computer. I attached the wrong file to you in the morning rush... I will now comment it and post it. As for ranif() - it is a built-in MathCad function: it returns a vector, of the length given by its first parameter, of uniformly distributed random numbers in the range between the second and third parameters.
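For readers without MathCad, a Python stand-in with the behaviour just described (assuming the description above is complete):

```python
import numpy as np

def ranif(m, a, b):
    """Vector of m uniformly distributed random numbers between a and b."""
    return np.random.default_rng().uniform(a, b, m)

print(ranif(5, -1.0, 1.0))
```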
 

I've commented it extensively and corrected a couple of errors along the way.

This is what it draws:


I attach the file in three formats: 11, 12 and 13 (*.xmcd).

Now I'm looking at your corrections... Actually, I should have looked at the corrections first, before commenting... :-)

I'm thinking about what you wrote two posts above. The main thing for me is to understand, so that the "picture" forms in my head - then I will do everything.

So far that picture has turned out not quite right - I misunderstood "training at every step" - and there is no new one yet.

Files:
neyron_1.rar  197 kb