"New Neural" is an Open Source neural network engine project for the MetaTrader 5 platform. - page 36

I don't know if the cognitron is anything like that.
Waiting for the sequel :)
We need information on:
- Conjugate gradient descent
- BFGS
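Until a proper write-up appears, here is a minimal, illustrative sketch (not from this thread) of trying both requested methods through SciPy's optimizer interface, with the Rosenbrock function standing in for a real training objective:

```python
# Illustrative only: compare conjugate gradient and BFGS on a toy objective.
import numpy as np
from scipy.optimize import minimize, rosen

x0 = np.zeros(5)
for method in ('CG', 'BFGS'):           # conjugate gradient, then BFGS
    res = minimize(rosen, x0, method=method)
    print(method, res.nfev, res.fun)    # function evaluations, final value
```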
I will now try to present my idea of building neural networks for analyzing price patterns. Those who have read my lectures 2 and 3 will understand it immediately. The essence is the classification of price patterns into Buy, Sell, or Hold.

The price over a certain period (say, 100 bars) is filtered by a layer of simple neurons, S1. The input weights of these neurons describe the impulse responses of the filters. In the visual-cortex example, these weights described straight segments of different slope and length in two-dimensional image space. In quotes we also have a two-dimensional space: time and price. We can assume that the S1 filter weights in our case likewise describe straight segments in time-price space with two possible inclinations: up and down. The slope angle depends on the length of each filter. These lengths can be pre-selected, e.g. 4, 8, 16, 32 bars. Each filter is a straight line normalized so that the sum of all values equals zero and the sum of squares equals 1 (or some other normalization).

In the next layer, call it S2, more complex patterns are formed from the segments of layer S1, and so on. At the output of this multilayer transformation of quotes we have a digital code describing the current pattern; patterns that are similar to each other but stretched differently in time and price get the same code. These codes are fed to the inputs of a Support Vector Machine (SVM), which is pre-trained on historical patterns to determine Buy, Sell, or Hold conditions.

The problem here is determining the shape of the filters in layers S1, S2, etc. I chose straight segments and their combinations for simplicity. By the way, in the HMAX model of the visual cortex, all shapes of the spatial filters are pre-selected based on biological experiments. We need to find an algorithm that determines these filters automatically. Such algorithms have already been developed for visual layer V1 (von der Malsburg, Linsker, Miller, LISSOM, Olshausen). We can borrow them for our task of classifying price patterns.
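To make the S1 layer concrete, here is a minimal sketch of one reading of the above. The lengths 4, 8, 16, 32 and the zero-sum / unit-sum-of-squares normalization come from the post; the `line_filter` helper and the use of cross-correlation as the filtering step are assumptions for illustration:

```python
import numpy as np

def line_filter(length, rising=True):
    """Straight-segment filter: zero sum, unit sum of squares."""
    f = np.linspace(-1.0, 1.0, length)   # straight line
    if not rising:
        f = -f                            # falling slope
    f = f - f.mean()                      # sum of all values = 0
    return f / np.linalg.norm(f)          # sum of squares = 1

# One rising and one falling segment at each pre-selected length.
bank = [line_filter(n, r) for n in (4, 8, 16, 32) for r in (True, False)]

# S1 response: slide each filter over a 100-bar price window
# (cross-correlation; synthetic random-walk prices for the demo).
prices = np.cumsum(np.random.default_rng(1).normal(size=100))
responses = [np.correlate(prices, f, mode='valid') for f in bank]
```

Each filter then responds most strongly to price segments with the matching slope and scale, which is the behavior the S1 layer needs before S2 combines segments into larger patterns.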
In my time I studied in detail almost all methods of training feedforward networks. I'm sure that, among the gradient-based methods, the Levenberg-Marquardt method is the best (https://en.wikipedia.org/wiki/Levenberg%E2%80%93Marquardt_algorithm). It always finds a better minimum than all kinds of BACKPROPs and RPROPs, and does so even faster. What I posted in BPNN (a kind of RPROP) is child's play compared to LM. BFGS is more time-consuming, and its result is no better than LM's.
I agree! In NeuroSolutions, for example, the Levenberg-Marquardt method converges where the other methods get stuck in local minima.
However, LM requires significant computational resources. The time per pass is longer.
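For anyone who wants to experiment, here is a minimal sketch of LM in practice, assuming SciPy's MINPACK-based implementation; the tiny one-hidden-layer network and all the names in it are illustrative, not code from this thread:

```python
# Fit a tiny feedforward net by Levenberg-Marquardt (least-squares view).
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(0)
x = np.linspace(-1.0, 1.0, 50)
y = np.sin(3.0 * x)                          # toy target

H = 5                                        # hidden units

def unpack(p):
    return p[:H], p[H:2*H], p[2*H:3*H], p[3*H]

def residuals(p):
    w1, b1, w2, b2 = unpack(p)
    hidden = np.tanh(np.outer(x, w1) + b1)   # (50, H)
    pred = hidden @ w2 + b2
    return pred - y                          # LM minimizes sum of squares

p0 = rng.normal(scale=0.5, size=3 * H + 1)
fit = least_squares(residuals, p0, method='lm')   # Levenberg-Marquardt
print('final SSE:', np.sum(fit.fun ** 2))
```

Note the higher cost per pass mentioned above: each LM step solves a linear system built from the Jacobian, which is why it pays off on small and medium networks but scales poorly to very large ones.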
...
Each filter is a straight line normalized so that the sum of all values equals zero and the sum of squares equals 1 (or some other normalization).
...
I have no proof right now, but my intuition tells me that this double condition (the sum is 0 and the sum of squares is 1) is contradictory and will hold only for a very narrow set of choices. If I'm wrong, please kick me.
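For reference, the two conditions can hold simultaneously for any non-constant vector: centering makes the sum zero, and rescaling by the norm makes the sum of squares one. A minimal NumPy check (illustrative, not from the thread):

```python
import numpy as np

v = np.array([1.0, 2.0, 4.0, 8.0])   # arbitrary non-constant vector
v = v - v.mean()                      # now sum(v) == 0
v = v / np.linalg.norm(v)             # now sum(v**2) == 1
print(v.sum(), (v ** 2).sum())        # ~0.0 and 1.0
```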
I see. But it is tied specifically to NN. It will be tied down more precisely.
I'm afraid we'll end up with something like "AWVREMGVTWNN" :) The main thing is to convey the essence; the nuances are not so important.
Meta Neuro Engine (MNE)
Meta EngiNeuro (MEN) (c)
after all, we are engineers :)