Market etiquette or good manners in a minefield

 
paralocus >> :

Here's more to the point:

1. Another option for infecting the system is to introduce an additional random input into a neuron or a group of neurons - an "organ".

2. "Organ" can be represented as a specialized group of neurons with one universal feedback - i.e. each organ neuron "knows" what is in the output of any other neuron of its group( organ or family), and each group is aware of what is in the output of the organ. Such a NS will be capable of dynamic self-adaptation and the need for learning will be one of its dominants - i.e., the system can purposefully and self-motivatedly seek and generalize the knowledge it needs. Our task will be to erect obstacles for it and scatter bits of knowledge here and there -:)

A dead end -- if everything is done correctly (in the organisation and training of the network), then after a certain number of training iterations the organ will become isolated.


Another thing -- "infecting" the learning process by introducing a relatively small random component into the delta rule. In some cases this increases the learning rate, and it is also an effective way to get out of local minima. This is already a proven method.
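As a rough illustration of that last point (my own sketch, not code from the thread): a delta-rule update for a single tanh neuron with a small random component added to each weight correction. The noise scale of 0.001 and the learning rate are assumed values, not ones given in the discussion.

import numpy as np

rng = np.random.default_rng(0)

def delta_rule_step(w, x, target, lr=0.1, noise_scale=0.001):
    # one delta-rule update with a small random component ("infection") added to the correction
    y = np.tanh(x @ w)                                   # neuron output
    error = target - y                                   # delta-rule error
    correction = error * (1.0 - y**2) * x                # error times tanh derivative times input
    noise = rng.normal(0.0, noise_scale, size=w.shape)   # the small random component
    return w + lr * correction + noise

# toy usage: learn a fixed tanh mapping from random inputs
w = rng.normal(0.0, 0.1, size=3)
for _ in range(5000):
    x = rng.normal(size=3)
    w = delta_rule_step(w, x, target=np.tanh(0.5*x[0] - 0.3*x[1] + 0.2*x[2]))
print(w)   # should end up near [0.5, -0.3, 0.2], up to the injected noise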

 
TheXpert >> :

A dead end -- if everything is done correctly (in the organisation and training of the network), then after a certain number of training iterations the organ will become isolated.



You don't do it right...

An organ is an organ precisely so that the liver does not interfere with the function of the spleen. The network will not isolate all its organs, since for it that would be a critical reduction of entropy - "death".

Well, if anything falls off, it was unnecessary.

 
paralocus wrote >>

I've already had a chance to appreciate the main theme of this thread! -:) You're a genius, and I'm not kidding!

I have an idea. Quite possibly a fresh one. Last night, I had a "short circuit"... on all levels of my personal neural network.)

The thing is, I have been studying man all my life, and not only in the context of his social and personal realization - for all this is "surface" - but as a holistic phenomenon of being and a "vessel of consciousness". Today, in one night, all that has been accumulated over many years has been systematized (self-organized) from a simple ordered collection of facts and assumptions into an integrity in its own right.

I can't hide my excitement! Oh, well... that was a lyrical digression.

The idea is simple:

To increase the robustness of NSs of any scale or purpose, you have to try to infect them... with a virus. A virus is certainly fatal for the deterministic logic of a Turing machine; for a NS and artificial intelligence, with proper, "dosed" application, it may turn out to be real "living water". Now let's take it point by point:

1. All living organisms are, in essence, neural networks. The statement may seem too bold, but it is a phenomenological fact.

2. All living organisms are placed in an aggressive environment for the purpose of learning - we call it evolution. We need only remember that along with the evolution of forms there is an ongoing evolution of the individual consciousnesses embodied in those forms. Consciousness itself is an effect of the complexity of the system (the neural network), and its evolutionary "Planck constant" -:), I assume, is the ratio of the system's complexity to its entropy.

3. Systems whose entropy has fallen below a certain limit die out because they are incapable of further evolution; however, systems whose entropy has risen above a certain limit also self-destruct. Hence the conclusion: for a system to evolve successfully, its entropy should periodically, for a certain time, approach the limit of the values permissible for that system. Such a state of affairs we call a "disease". I use the word "disease" in a rather broad sense - a perfectly healthy-looking criminal is a sick man. Only it is not his body that is sick but his mind, and the pain he receives comes mostly not in the form of fever or flu, but in the form of a so-called "heavy cross", "fate" and so on. This "social" pain is a kind of teaching influence of the evolutionary continuum - it raises the creature's entropy to barely bearable limits. This raises a philosophical question about the teacher and his aims... which, however, is far beyond the scope of our forum discussion -:)

4. Those who survive have developed immunity - in the broadest sense - i.e. not only against pathogenic germs and social pressures, but, what is even more important for evolution, against transactional influences, both external and internal.

5. In any living system there are "germs" that will surely kill it if its immunity weakens far enough. Why did nature do this? Precisely to increase the system's ability to resist environmental factors through constant internal "training" for survival, and consequently to give it more opportunities (time) to continue its individual evolution.

6. Let us assume that the task of an evolving system is to develop immunity (in all senses). Then an interesting thing turns out: the number of inputs of a living NS, and even more so the number of its outputs, is ridiculously small compared with the number of its neurons and connections! So we sharply increase the number of neurons in the intermediate layer (if there are three layers - input, hidden and output), and then we can try to "infect" the NS. This can be done by introducing a metered randomized error during the correction of the weights! Going a little further, alternative training of the NS by increasing or decreasing the frequency or amplitude of this randomized error also becomes possible.

For example, before correcting the weights we could add a small error to the corrector with a function that (randomly) about once in every 1000 calls returns a random value from a certain range (e.g. +0.01 / -0.01). It is never known when, or to which neuron, the small erroneous increment will go. The more often such increments occur, the higher the entropy of the system. In this case the NS will have to take into account... its own error!
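A sketch of such a corrector (assuming the roughly-once-per-1000-calls frequency and the +/-0.01 range mentioned above; the learning rate is just an illustrative choice):

import random

def metered_error(p=1.0 / 1000.0, lo=-0.01, hi=0.01):
    # roughly once per 1000 calls, return a small random value; otherwise return 0
    return random.uniform(lo, hi) if random.random() < p else 0.0

def corrected_weight(w, delta, lr=0.1):
    # weight correction with the metered random error added to the increment;
    # the network never knows when, or to which weight, the extra error will go
    return w + lr * delta + metered_error()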

Here is another important point:

1. Another option for infecting the system is to introduce an additional random input into a neuron or a group of neurons - an "organ".

2. "Organ" can be represented as a specialized group of neurons having one universal feedback - i.e. each organ neuron "knows" what is at the output of any other neuron of its group( organ or family), and each group is aware of what is at the organism output. Such a NS will be capable of dynamic self-adaptation and the need for learning will be one of its dominants - i.e., the system can purposefully and self-motivatedly seek and generalize the knowledge it needs. Our task will be to put up obstacles and scatter bits of knowledge here and there -)

+5

I've been thinking about something like that myself. A lot of interesting and non-trivial dependencies open up when working with AI.

For example, not long ago I realized why we need dreams... It turns out that during sleep our brain exercises its synapses by re-experiencing what it has seen before, thereby counteracting their inevitable dystrophy (it is a biological object in which metabolic processes run constantly and errors accumulate). If we had no sleep, we would lose all cognitive skills and long-term memory within a year! We would be reduced to simple creatures that can only remember what they see at the moment. Powerful experiences (related to life-changing events) haunt us in our sleep again and again, thereby hammering the useful knowledge in so firmly that you couldn't cut it out with an axe.

 
Neutron >> :

Powerful experiences (related to life-changing events) haunt us in our sleep again and again, thereby hammering the useful knowledge in so firmly that you couldn't cut it out with an axe.

Well, that's manageable. The context of learning does not have to be negative. For a system that has "grasped" what is expected of it and "accepted" that goal as its own (as its primary purpose = the meaning of life), dreams cease to bear the mark of nightmares, and in dreams learning can continue at very high speed.

 
Neutron, I think that if we switch completely to learning by manipulating the entropy of the system, local minima will disappear as a class. However, the training may require many more epochs, and not every network will be able to complete it. But the ones that can... I can't even imagine what they'll be capable of.
 
Neutron, I would still like to talk about whitening the inputs, and also about propagating the error to the next layers.

 

Hi, paralocus.

I'm currently messing around with the ZigZag in Mathcad (there's a glitch somewhere in my head) and at the same time I'm normalizing the input data for the NS. Here is what I got last time. Suppose we have input data with an arbitrary distribution of increments, defined on the whole number line. We need an algorithm that maps this distribution onto the range +/-1 with a "shelf"-shaped (uniform) probability density (PD).

Let's take the EURUSD 1m series as an example and plot the PD of the differences d[i] = Open[i] - Open[i+1] (figure on the left):

We get a nice exponential-looking distribution, and now we will convert it into a unit shelf. To do this we construct the distribution function by taking the cumulative sum of the PD (figure on the right), shift the curve so that it passes through zero at the maximum of the PD, and scale each branch to 1, keeping the sign of the branch. We obtain a sigmoid-like curve. Now we take the initial series of increments and act on each of them with this sigmoid as an operator, which maps them onto the unit shelf. To do this I simply substitute the increment value d[i] as the argument of the resulting sigmoid:

The result is not exactly a shelf, but close to it. Compare it with the original distribution. The gap in the centre of the resulting distribution is inevitable: the dense centre has to be stretched out somewhere for the tails to thicken. I think it's a perfect input cocktail for the NS.
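A sketch of essentially the same construction (my own illustration: synthetic Laplace-distributed increments stand in for the EURUSD M1 d[i] = Open[i] - Open[i+1], and I use the plain empirical CDF, y = 2*F(d) - 1, as a simplified stand-in for the branch-fitted sigmoid described above):

import numpy as np

rng = np.random.default_rng(2)

# synthetic stand-in for the increments d[i] = Open[i] - Open[i+1]
d = rng.laplace(0.0, 0.0005, size=100_000)

# empirical CDF evaluated at each increment, then mapped onto (-1, +1)
ranks = np.argsort(np.argsort(d))        # rank of every increment within the sample
F = (ranks + 0.5) / d.size               # empirical CDF values in (0, 1)
y = 2.0 * F - 1.0                        # the "shelf": approximately uniform on (-1, +1)

hist, _ = np.histogram(y, bins=20, range=(-1.0, 1.0))
print(hist)                              # all bins should hold roughly d.size / 20 values

With the plain empirical CDF the shelf comes out flat by construction; with real quotes the increments are discrete (multiples of one point), so many of them are exactly equal and are forced onto the same output value, which is one likely reason a perfectly smooth shelf does not appear.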

P.S. I wonder why it didn't come out as a perfect shelf. Is that fundamentally impossible, a limitation of the method, or am I missing something in the construction?

 
Yay! I was afraid you'd suddenly lose interest... -:)

I'll get to the bottom of what you wrote and I'll answer it.

 
Neutron >> :

P.S. I wonder why it didn't come out as a perfect shelf. Is that fundamentally impossible, a limitation of the method, or am I missing something in the construction?

I also thought about it yesterday... (of course, not as mathematically as you - I just can't do it that way). In general, I have a suspicion that there is a reason for it! So you're not missing anything.

There is something special about the properties of d[i] = Open[i] - Open[i+1]. My intuitive hunch is that in this case the input is a fractal time series which is being gently acted on by a continuous operator (e.g. tanh(x) or a sigmoid), so an ideal shelf will not come out - the distribution of Open[i] - Open[i+1] most likely has Hurst-type memory. Mine is rougher, so the middle - the zero - is missing altogether. By the way, why do you take Open?

 

I don't like fiddling with a bar that hasn't finished forming. It's a habit from Mathcad: when testing a TS it is all too easy to "peek" into a future that hasn't happened yet! The only guarantee against that is fully formed bars, or opening prices. They certainly don't bounce around.

Give me a hint: what exactly is your problem with the correlation of the input signals? What do you use as inputs, and why do you think the problem exists? After all, it's easier to make sure it doesn't exist than to solve it :-)
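For what it's worth, a quick way to see how strong that correlation actually is (a sketch; taking successive lagged increments of one synthetic series as the inputs is just an assumed example, not necessarily what paralocus feeds in):

import numpy as np

rng = np.random.default_rng(3)

# assumed example: the inputs are successive lagged increments of one synthetic price series
price = np.cumsum(rng.laplace(0.0, 0.0005, size=10_000))
d = np.diff(price)

n_lags = 5
X = np.column_stack([d[i:len(d) - n_lags + i] for i in range(n_lags)])   # one column per lag

# correlation matrix of the inputs: off-diagonal values near zero mean there is little to "solve"
print(np.round(np.corrcoef(X, rowvar=False), 3))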
