neural network and inputs - page 7

 
solar:
Well, for example, ordinary banks track what you do with your card, and if purchases look "uncharacteristic", the network blocks the card. Networks have long been working in airports and in the underground, for cases where the right "face" has to be found. A network identifies the target in detection-and-guidance systems. At large enterprises they serve as a backup for a human operator in case of emergency. In fact, networks work in many places.

The widest.

As I see it, the standard setup for a network is as follows. It is known what to teach it. For example, we teach it to recognise the handwritten letter "a" by presenting the network with thousands of variants of its spelling. I suspect that the deviations of all those variants from the ideal form a stationary series. So there are two conditions for success: you know what to teach, and the deviations from the ideal form a stationary series. It seems possible to relax this ideal somewhat, but then a third problem enters: computational complexity.

But I am discussing the market. For the market there are plenty of mathematical algorithms that allow a more deliberate approach to building a TS (trading system). So it is more productive to spend time on mathematical statistics than on rather complex neural networks.
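The "deviations form a stationary series" condition can at least be sanity-checked numerically. A crude sketch with synthetic data standing in for the letter deviations (a proper test would use something like an augmented Dickey-Fuller test; here we just split the series in half and compare the halves' statistics):

```python
import random
import statistics

def looks_stationary(series, tol=0.5):
    """Crude stationarity check: compare the mean and standard deviation
    of the two halves of the series. Returns True if both agree within
    a relative tolerance `tol`."""
    half = len(series) // 2
    a, b = series[:half], series[half:]
    mean_a, mean_b = statistics.fmean(a), statistics.fmean(b)
    sd_a, sd_b = statistics.stdev(a), statistics.stdev(b)
    scale = max(abs(mean_a), abs(mean_b), sd_a, sd_b, 1e-9)
    return (abs(mean_a - mean_b) / scale < tol
            and abs(sd_a - sd_b) / scale < tol)

random.seed(0)
# Stationary: i.i.d. noise around a fixed "ideal" letter shape.
stationary = [random.gauss(0.0, 1.0) for _ in range(2000)]
# Non-stationary: the mean drifts, as market-like series tend to.
drifting = [random.gauss(i * 0.01, 1.0) for i in range(2000)]

print(looks_stationary(stationary))  # stationary series -> True
print(looks_stationary(drifting))    # drifting series -> False
```

The point of the sketch is only that the first condition is checkable, not that this crude test is sufficient.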

 
faa1947:

The widest.

As I see it, the standard setup for a network is as follows. It is known what to teach it. For example, we teach it to recognise the handwritten letter "a" by presenting the network with thousands of spelling variants. I suspect that the deviations of all those variants from the ideal form a stationary series. So there are two conditions for success: you know what to teach, and the deviations from the ideal form a stationary series. It seems possible to relax this ideal somewhat, but then a third problem appears: computational complexity.

But I am discussing the market. For the market there are plenty of mathematical algorithms that allow a more deliberate approach to building a TS (trading system). So it is more productive to spend time on mathematical statistics than on rather complex neural networks.

As for recognising letters of the alphabet (by analogy): we would have to assume that the market is a closed system - that is what would make it stationary (in your language). I.e. we would have to feed in everything we know about the market. )))))
 
solar:
As for recognising letters of the alphabet (by analogy): we would have to assume that the market is a closed system... that is what would make it stationary (in your language). I.e. we would have to feed in everything we know about the market. )))))


1. The market is not a closed system, but a living system, as it is formed by the opinions of people.

2. 'Stationary' is not my language.

That's it - stand down.

 
faa1947:


1. The market is not a closed system, but a living one, because it is shaped by people's opinions.

2. 'Stationary' is not my language.

That's it - stand down.

I wasn't trying to prove anything to you - I'm not interested ))).

return(0);

 
faa1947:

As I see it, the standard setup for a network is as follows. It is known what to teach it. For example, we teach it to recognise the handwritten letter "a" by presenting the network with thousands of spelling variants.

Not really - although many simple networks do learn as you describe, there are networks that learn on the fly, like our brain. By the way, denying the usefulness of networks is much like denying the usefulness of our own brain. And our brain successfully operates on all types of data, stationary and non-stationary. It is another matter to claim that simple textbook networks can do what our brain can - find patterns and so on. So I am pessimistic about the usefulness of these "simple" networks in trading, and there is not yet enough computing power for more complex networks working on biological principles.
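For what it's worth, "learning on the fly" can be illustrated with the simplest possible online learner - a perceptron updated one example at a time, with no separate offline training phase (purely illustrative; the camera mentioned below certainly uses something far more elaborate):

```python
# Minimal online learning: a perceptron whose weights are updated
# after every single example as the stream arrives.
def predict(w, x):
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) > 0 else 0

def update(w, x, label, lr=0.1):
    err = label - predict(w, x)          # 0 if already correct
    return [wi + lr * err * xi for wi, xi in zip(w, x)]

w = [0.0, 0.0, 0.0]                      # first component is a bias input
stream = [([1, 2, 1], 1), ([1, -1, -2], 0),
          ([1, 3, 1], 1), ([1, -2, -1], 0)] * 25   # examples arrive one by one
for x, label in stream:
    w = update(w, x, label)

print(predict(w, [1, 2.5, 1]))    # -> 1
print(predict(w, [1, -1.5, -1]))  # -> 0
```

The toy data here is made up and linearly separable on purpose; the point is only the incremental update rule, not the capability of the model.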

By the way, here is an interesting video about a self-learning camera. Once it has been shown a face, the camera learns to recognise that face at different distances and angles. It was created by a Czechoslovakian student, who later founded his own company.

https://www.youtube.com/watch?v=1GhNXHCQGsM

 
grell: It is possible not to normalise the input or intermediate signals at all; by the time the signal has passed through all the layers, its level will have risen to the required +/- range, and the output will be normalised... Something like that.

Not normalising is the best way to feed the data - if it is possible, of course. All the informativeness of the signal is preserved; there is no distortion.
 
alsu: The problem is that different inputs can have different scales.

It is better to wrestle with the scale - there is a better chance of retaining the information - than to distort the information by normalisation.
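One middle ground between these two positions is worth noting: plain per-feature standardisation is an invertible affine transform, so strictly speaking it equalises scales without losing information (clipping or squashing would be a different matter). A minimal sketch with made-up price and volume figures:

```python
# Per-feature z-score standardisation: an invertible affine map that
# equalises the scales of different inputs without destroying information,
# since the original values are exactly recoverable.
def standardize(column):
    mean = sum(column) / len(column)
    sd = (sum((x - mean) ** 2 for x in column) / len(column)) ** 0.5
    sd = sd or 1.0                        # guard against a constant column
    return [(x - mean) / sd for x in column], (mean, sd)

def unstandardize(z, params):
    mean, sd = params
    return [v * sd + mean for v in z]

price = [1.3012, 1.3020, 1.2995, 1.3050]    # one scale...
volume = [1200.0, 950.0, 1800.0, 1100.0]    # ...and a very different one

z_price, p = standardize(price)
z_volume, q = standardize(volume)
restored = unstandardize(z_price, p)

# Both features now occupy comparable ranges for the network's inputs,
# and the originals are recoverable to machine precision:
print(max(abs(a - b) for a, b in zip(price, restored)))
```

Whether the *statistics* used for scaling stay valid out of sample is, of course, exactly the non-stationarity problem discussed below.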
 
alsu: My view is that an NS does not like non-stationarity
Well, who likes non-stationarity? Any algorithm, even the most sophisticated one, will fail when the characteristics of the input signal change. Any algorithm. Miracles don't happen ))))
 
solar:
And for the second time I ask: where do these networks operate in real time?

The point is that an NS is essentially a certain sequence of calculations with unknown coefficients to be determined during training. In mathematics this task is called a regression problem (classification and clustering are its special cases). It can be solved by quite different algorithms, each with its own properties and peculiarities, advantages and disadvantages. The advantage of a classical NS is that it can work in the absence of a priori data about the object.

For example, we teach a network to recognise images of the digits 0 to 9 by showing it pictures and telling it the correct answers. If we did not specify in the network's structure that the digits will be of a certain size, colour and so on, it has to adjust itself to the input data. And it does - but slowly, and that is exactly the drawback of an NS. And if the first 1000 digits shown to the network were black on a white background, and then we start showing it white on black (introducing non-stationarity into the input stream), the network has to be trained anew.

But if we explained to the network in advance that a picture can be inverted (i.e. we described the non-stationarity and built it into the NS structure, for example by telling it that an input of -N should be interpreted as +N), then the network will not be confused by this type of non-stationarity. It will still break down on another one. In this, however, the network is no different from all other algorithms: they work best on those non-stationarities that a human has built into the system.
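The inverted-digits point can be illustrated with a deliberately tiny toy - a nearest-centroid classifier, not a real neural network, on invented 4-pixel "digits":

```python
# Toy illustration: a nearest-centroid "recogniser" trained on
# dark-on-light patterns fails on inverted (light-on-dark) inputs
# unless the known non-stationarity is built into its structure.
def centroid(samples):
    n = len(samples)
    return [sum(col) / n for col in zip(*samples)]

def dist(a, b):
    return sum((u - v) ** 2 for u, v in zip(a, b))

def classify(x, model):
    return min(model, key=lambda label: dist(x, model[label]))

# 1.0 = ink. Two "digit" classes on a 4-pixel image, dark-on-light.
left = [[1, 0, 0, 0], [0.9, 0.1, 0, 0]]
mid  = [[0, 0, 1, 0], [0, 0.1, 0.9, 0]]
model = {"left": centroid(left), "mid": centroid(mid)}

flipped = [1 - p for p in left[0]]     # the same "digit", inverted
print(classify(flipped, model))        # -> "mid": the model is fooled

def undo_inversion(x):
    # The known non-stationarity, built in: these classes are mostly
    # background, so a mostly-ink image must have been inverted.
    return [1 - p for p in x] if sum(x) / len(x) > 0.5 else list(x)

print(classify(undo_inversion(flipped), model))  # -> "left"
```

As in the post above, the fix only handles the non-stationarity a human anticipated; any other kind of corruption would still break the toy model.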

But there are differences, too: for example, many algorithms for tracking input-signal parameters described in the theory of optimal control can, with some limitations, track a wider class of non-stationarities than was laid down a priori. An NS, alas, cannot do this. Perhaps the only option for an NS is quasi-stationary systems, i.e. ones whose parameters drift with a characteristic time no shorter than the time it takes to train the network.
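The quasi-stationary case can be sketched in a few lines: an estimator with exponential forgetting (here a simple EMA, standing in for any continuously retrained model) follows a parameter that drifts more slowly than the adaptation time, while a "train once" estimate goes stale:

```python
import random

# An exponentially forgetting estimator tracks a slowly drifting mean;
# a one-off estimate from the first samples does not.
def ema_track(signal, alpha=0.05):
    est, out = signal[0], []
    for x in signal:
        est += alpha * (x - est)      # forget old data at rate alpha
        out.append(est)
    return out

random.seed(1)
# The true mean drifts slowly from 0 to 2 over 2000 samples.
signal = [i / 1000 + random.gauss(0, 0.2) for i in range(2000)]

tracked = ema_track(signal)            # keeps adapting
frozen = sum(signal[:100]) / 100       # "trained once", never updated

print(round(tracked[-1], 2))  # close to the current level (~2)
print(round(frozen, 2))       # stuck near the initial level (~0)
```

The `alpha` here plays the role of the "characteristic time" in the post: if the drift were faster than the forgetting rate, even this estimator would lag hopelessly.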

So, to answer the question: networks are used mainly in tasks where it is impossible or very difficult to define a model of the input signal a priori: recognition tasks, semantic analysis (of text, sound, images, ...), cognitive tasks (solving captchas) and their combinations. But complex tasks require a BIG (literally) and complex network, such as this (and this, by the way, is at the limit of current technology - check it out).

 
LeoV:
Well, who likes non-stationarity? Any algorithm, even the most sophisticated one, will fail when the characteristics of the input signal change. Any algorithm. Miracles don't happen ))))


Again, if we know the nature of the non-stationarity in advance, we can build it into the algorithm and, upon detecting that non-stationarity, quickly adjust the controller's parameters.
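A minimal sketch of what "build the known non-stationarity into the algorithm" might mean in practice (all numbers invented): we know the signal's level can jump, so on detecting a jump we reset the estimator instead of letting it average across two regimes.

```python
import random

# A rolling-mean estimator that knows level jumps are possible:
# when a new sample deviates from the current estimate by more than
# `jump`, the history is discarded and adaptation restarts at once.
def adaptive_mean(signal, window=50, jump=1.0):
    history, estimates = [], []
    for x in signal:
        if history and abs(x - sum(history) / len(history)) > jump:
            history = []          # regime change detected: readjust fast
        history.append(x)
        if len(history) > window:
            history.pop(0)
        estimates.append(sum(history) / len(history))
    return estimates

random.seed(2)
# The level jumps from 0 to 3 at sample 500 -- a known *kind* of change.
signal = [random.gauss(0 if i < 500 else 3, 0.1) for i in range(1000)]
est = adaptive_mean(signal)

print(round(est[499], 2))  # still near the old level, 0
print(round(est[510], 2))  # already near the new level, 3
```

A plain rolling mean of the same window would still be dragging old-regime samples along at sample 510; the detect-and-reset step is exactly the a priori knowledge about the non-stationarity that the thread keeps coming back to.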