Market etiquette or good manners in a minefield - page 67

 
Neutron >> :

Yes, what can I say? Go ahead!

Read Pastukhov. His PhD thesis.

You'd be surprised.

I can't find it. Can you give me a link? I'd like to look through this work. I usually don't read dissertations, since it's clear what they're written for, but still, I could use some food for thought, I'm in a creative slump :)

 

Of course, the only really smart people are the ones who have never written a dissertation! They're the only ones who do real things in this world. And anyone who has written one, even once in his life, is no longer a proper guy after that: an obvious sucker, a nerd and a fool! And TV and mobile phones were invented by Cool Dudes and Chicks... or no, they only use the phones and don't watch TV at all... I'm confused. And the fact that you can't build an aircraft without knowing strength of materials, is that all right? And you can't write a strength-of-materials textbook without having mastered the field, and that's already a dissertation...

So you don't read dissertations not because it's clear what they're written for, but because you don't understand what's in them!

No offense, okay?

 

So, we take a single layer with the parameters:

K = 1, d = 24+1, S (learning rate) = 0.01. We can see that it learns well:




That is, as far as I understand, the quality of the network's learning can be judged from this graph as follows: the smaller the dispersion, the smaller the final value of the average errors, and the closer these errors are to each other, the better.
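The criterion described above can be sketched numerically: run the training several times and look at both the mean and the spread of the final average errors. This is a toy illustration only; the data, activation, and function names are my assumptions, not the poster's actual code.

```python
import numpy as np

def train_single_layer(d=25, epochs=200, S=0.01, seed=0):
    """Train a single-layer tanh perceptron on a toy target by gradient
    descent with learning rate S; return the per-epoch mean absolute error."""
    rng = np.random.default_rng(seed)
    X = rng.standard_normal((100, d))      # toy training vectors
    w_true = rng.standard_normal(d)
    y = np.tanh(X @ w_true)                # toy target the layer can represent
    w = 0.1 * rng.standard_normal(d)
    errs = []
    for _ in range(epochs):
        out = np.tanh(X @ w)
        e = y - out
        # delta rule: step along the error gradient, scaled by S
        w += S * X.T @ (e * (1.0 - out**2)) / len(X)
        errs.append(np.abs(e).mean())
    return np.array(errs)

# Repeat training with independent seeds; a small mean of the final errors
# AND a small spread between runs is what "learns well" looks like here:
finals = [train_single_layer(seed=s)[-1] for s in range(10)]
print(np.mean(finals), np.std(finals))
```

The point of averaging over several runs is exactly the dispersion argument above: one lucky run proves nothing, but a tight cluster of low final errors does.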

The way such a grid works is as follows:


That is, you can see it's not great... Let's increase the learning rate: S = 0.7


It looks better, but the learning statistics look suspicious:
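One plausible reason the statistics look suspicious at a large learning rate is overshoot: each step jumps past the minimum and the weight oscillates in sign. A minimal stand-in, using plain gradient descent on f(w) = w² as a toy model of one weight (the values here are illustrative, not taken from the posted runs):

```python
def descend(S, steps=10, w=1.0):
    """Gradient descent on f(w) = w^2 (gradient 2w) with learning rate S."""
    path = []
    for _ in range(steps):
        w -= S * 2.0 * w
        path.append(w)
    return path

smooth = descend(0.01)  # w shrinks slowly and monotonically
jumpy = descend(0.7)    # update factor is (1 - 2*0.7) = -0.4:
                        # w still converges, but flips sign every step
```

So a large S can still "work" (the magnitude shrinks), while producing exactly the kind of jittery error statistics described above.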


Questions:

1. What are the criteria for assessing the learning capacity (teachability) of the network?

2. Can the example above be considered curve-fitting (overfitting)?

3. You said that the value of K is a subtle issue. The graph below shows network learning at K < 1, which looks more attractive:



Could it be that the length of the training vector for market-type time series is not a constant?


And lastly, is my learning done here?

 
Neutron >> :

Of course, the only really smart people are the ones who have never written a dissertation! They're the only ones who do real things in this world. And anyone who has written one, even once in his life, is no longer a proper guy after that: an obvious sucker, a nerd and a fool! And TV and mobile phones were invented by Cool Dudes and Chicks... or no, they only use the phones and don't watch TV at all... I'm confused. And the fact that you can't build an aircraft without knowing strength of materials, is that all right? And you can't write a strength-of-materials textbook without having mastered the field, and that's already a dissertation...

So you don't read dissertations not because it's clear what they're written for, but because you don't understand what's in them!

No offense, okay?

No offence. But where's the link?:)

 
registred wrote >>

No offence. But where's the link?)

Sent it to you in a private message.

It's uploading!

To paralocus

I'll have to think about it.

 
By the way, I don't understand what's written in the dissertation either, apart from the most general points, although I honestly tried to understand it... sad.
 
Neutron >> :

Sent it to you in a private message.

I must have missed it. There's nothing there.

 
Neutron >> :

Of course, the only really smart people are the ones who have never written a dissertation! They're the only ones who do real things in this world. And anyone who has written one, even once in his life, is no longer a proper guy after that: an obvious sucker, a nerd and a fool! And you can't write a strength-of-materials textbook without published work in the field, and that's already a dissertation...

The mathematician Euler (my idol), who spent much of his career in Russia, wrote about 800 papers, each as important and novel as a modern dissertation.

 
paralocus wrote >>

1. What are the criteria for assessing the learning capacity of the network?

2. Can the example given be considered curve-fitting (overfitting)?

3. You said that the K-value is a subtle issue.

The K-factor really isn't as simple as I expected. Moreover, Ezhev obtained an estimate for the optimal length of the training vector as a function of the number of weights and inputs of the NN. He got it by solving the entropy-minimization problem in the NN's feature space. He didn't give the derivation, only the ready answer: P = w²/d. In fact, I see the following:

This is the learning dynamics of a two-layer network. The first run has P=d, the second P=w... the result is the same, and taking into account the dependence of learning speed on the training-vector length P, the first variant looks more attractive! In short, I don't understand...

If only we could get hold of a mathematician and ask him to solve the problem properly. That would be awesome! For now we'll have to collect experimental data, i.e. proceed empirically. It looks like the optimum of the training-vector length lies near the condition P=d.
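The quoted estimate is simple enough to write down directly. The function name below is mine, not from the source; w and d follow the notation above (number of weights, number of inputs).

```python
def optimal_training_length(w, d):
    """Estimate quoted above for the optimal training-vector length:
    P = w^2 / d, where w = number of weights, d = number of inputs."""
    return w * w / d

# For a single layer each input carries one weight, so w == d and the
# formula collapses to P == d, matching the empirical observation above
# that the optimum lies near P = d:
print(optimal_training_length(25, 25))  # -> 25.0
```

This collapse (w = d implies P = d for a single layer) may be why the empirical optimum and the formula agree here, while multi-layer networks, where w grows faster than d, would pull the two apart.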

A good criterion for assessing a network's fitness is its operation inside an MTS (mechanical trading system), but at the debugging stage we can use the profitability estimate proposed above: the product of the slope tangent and the instrument's volatility. Here, for example, is what this estimate looks like as a function of input dimension for a single layer:

This is the average result over a series of 500 experiments. You can see that at an input dimension of 10 this network has statistically beaten the spread! If this is confirmed on another sample of quotes (a month later, for example), you can trade it with confidence. Bring your results to this form and let's discuss. Then we'll overlay the two-layer data and see the difference...
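The debugging-stage profitability proxy described above (slope tangent times volatility) can be sketched as follows. The function name is mine, and measuring volatility as the standard deviation of price differences is an assumption; the source does not specify the exact definitions.

```python
import numpy as np

def yield_estimate(equity, prices):
    """Profitability proxy: slope of a straight-line fit to the equity
    curve (the 'slope tangent') times the instrument's volatility,
    taken here as the std of one-step price changes (an assumption)."""
    t = np.arange(len(equity))
    slope = np.polyfit(t, equity, 1)[0]     # tangent of the equity trend
    volatility = np.std(np.diff(prices))
    return slope * volatility
```

Averaging this number over many independent runs (the 500 experiments mentioned above) and comparing it against the spread is then a cheap stand-in for a full MTS test.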

As for learning, I'm sure I'm learning myself thanks to these conversations with you and forumers participating in the dialogue, and the process is endless... I hope.

registred wrote >>

Missed it. There's nothing there.

What, is there no message at all? Or just no file?

 

No. There's nothing at all in the personal ones.
