Piligrimus is a neural network indicator. - page 2

 
Piligrimm wrote:

In any case, there is a lot of potential for improvement; you can significantly increase the smoothness and introduce additional signals.

Smoothing, in technical language, is the cutting off of high-frequency components.

How many decibels of attenuation have been achieved between the maximum frequency of the useful signal and a frequency one octave higher?

 
EvgeTrofi wrote:

Can you please tell me where I can get the Butterworth LPF?

Yes, here you go!

K is the order of the filter. Better not to set it higher than 2 - it greatly increases the phase lag.

Files:
baterlout.mq4  2 kb
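The attachment is MQL4, but the filter itself is standard. Below is a Python sketch (not the attachment's actual code; the function name and parameters are mine) of a 2nd-order Butterworth low-pass derived via the bilinear transform. As a fact relevant to the attenuation question above, an n-th order Butterworth filter rolls off asymptotically at about 6n dB per octave, so roughly 12 dB/octave here.

```python
import math

def butterworth2_lpf(x, cutoff, fs):
    """2nd-order Butterworth low-pass (bilinear transform).
    cutoff and fs are in the same units, e.g. Hz; asymptotic
    roll-off above the cutoff is ~12 dB per octave."""
    c = 1.0 / math.tan(math.pi * cutoff / fs)
    b0 = 1.0 / (1.0 + math.sqrt(2.0) * c + c * c)
    b1, b2 = 2.0 * b0, b0
    a1 = 2.0 * (1.0 - c * c) * b0
    a2 = (1.0 - math.sqrt(2.0) * c + c * c) * b0
    y = []
    x1 = x2 = y1 = y2 = 0.0      # filter state (previous samples)
    for xn in x:
        yn = b0 * xn + b1 * x1 + b2 * x2 - a1 * y1 - a2 * y2
        y.append(yn)
        x2, x1 = x1, xn
        y2, y1 = y1, yn
    return y
```

A DC (constant) input passes through with unity gain, while a signal at the Nyquist frequency is suppressed entirely; raising the filter order steepens the roll-off but, as noted above, increases the lag.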
 

zfs wrote:

What is the point of your indicator, comrade? Explain, for example... because it looks like a simple moving average.

Neural networks are self-learning systems built from elements that simulate the workings of a neuron in the human brain.

The McCulloch-Pitts neuron model consists of a body (soma) and processes (axons) whose ends connect to the bodies of other neurons. The junction is called a synapse, and it is characterised by the strength of the synaptic connection w. If neuron i has synapses with connection strengths wi1, ..., win, the impulses Sj arriving from the other neurons are summed in it, and the output is

yi = f( wi1·S1 + wi2·S2 + ... + win·Sn )

Neuron model.
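The summation-and-threshold behaviour described above can be sketched in a few lines of Python (the function name and threshold parameter are illustrative):

```python
def mcculloch_pitts_neuron(inputs, weights, threshold=0.0):
    """McCulloch-Pitts neuron: impulses S_j from other neurons are
    weighted by synaptic strengths w_ij, summed, and passed through
    a simple step activation f()."""
    s = sum(w * x for w, x in zip(weights, inputs))
    return 1 if s >= threshold else 0
```

With weights 0.6, 0.6 and threshold 1.0, for example, the neuron fires only when both inputs are active (an AND gate).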


As the activation (transfer) function f() of a neural network, one usually chooses a simple step function, a symmetric or asymmetric S-function (sigmoid), or a linear-step function (see fig.).


Fig. simple step, asymmetric and symmetric S-shaped activation functions.
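The activation-function shapes named in the figure can be written out as follows (function names are mine):

```python
import math

def step(s):
    """Simple step: 0 or 1."""
    return 1.0 if s >= 0 else 0.0

def sigmoid(s):
    """Asymmetric S-function (logistic sigmoid), output in (0, 1)."""
    return 1.0 / (1.0 + math.exp(-s))

def tanh_act(s):
    """Symmetric S-function, output in (-1, 1)."""
    return math.tanh(s)

def linear_step(s, lo=-1.0, hi=1.0):
    """Linear-step function: linear in the middle, saturating at lo/hi."""
    return max(lo, min(hi, s))
```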


If we denote by wij the strength of the connection from the j-th neuron to the i-th, then a neural network consisting of n neurons is fully characterised by its matrix of synaptic connections:

W = ( wij ),  i, j = 1, ..., n

Usually the simplest neural networks, so-called layered neural networks, are used. The inputs of each layer are connected only to the outputs of the previous layer's neurons. The first layer is called the input layer, the last the output layer, and the rest hidden (inner) layers. An example of such a neural network: 4 - 8 - 5 - 3. This means the network consists of 4 layers: the input layer has 4 neurons, the output layer has 3, and the two hidden layers have 8 and 5 neurons respectively.
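A feed-forward pass through such a layered network can be sketched as below; the 4-8-5-3 architecture matches the example in the text, while the tanh activation, helper names, and random initialisation are illustrative choices:

```python
import math
import random

def forward(x, layers):
    """Feed-forward pass: each layer is a (weights, biases) pair;
    the inputs of each layer connect only to the outputs of the
    previous layer's neurons."""
    a = x
    for W, b in layers:
        a = [math.tanh(sum(w * ai for w, ai in zip(row, a)) + bi)
             for row, bi in zip(W, b)]
    return a

def make_layer(n_in, n_out, rng):
    """Random synaptic weights and zero biases for one layer."""
    W = [[rng.uniform(-1, 1) for _ in range(n_in)] for _ in range(n_out)]
    return W, [0.0] * n_out

rng = random.Random(0)
# The 4-8-5-3 network from the text: 4 inputs, hidden layers of 8 and 5, 3 outputs.
net = [make_layer(4, 8, rng), make_layer(8, 5, rng), make_layer(5, 3, rng)]
out = forward([0.1, 0.2, 0.3, 0.4], net)
```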
The neural network is controlled (trained) by changing the strengths of the synaptic connections in the matrix W. A neural network can be used as a self-training system, or it can be pre-tuned with specially selected samples (training with a teacher). When a neural network is being tuned, for a given set of input signals the network generates output signals that are compared with the samples, and the deviations are evaluated with a specially chosen loss function (e.g. the standard deviation). The matrix of synaptic connections is then modified so as to minimise the loss function (usually by gradient descent). A neural network can thus be classed among additive, nonlinear, nonparametric regression models.
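A minimal sketch of the "training with a teacher" loop just described: a single sigmoid neuron, a squared-deviation loss, and plain gradient descent on the synaptic weights. The OR data set and all names here are illustrative, not from the original text.

```python
import math

def sigmoid(s):
    return 1.0 / (1.0 + math.exp(-s))

# Teacher-provided samples: inputs and target outputs (logical OR).
samples = [([0.0, 0.0], 0.0), ([0.0, 1.0], 1.0),
           ([1.0, 0.0], 1.0), ([1.0, 1.0], 1.0)]

w = [0.0, 0.0]          # synaptic weights
bias = 0.0
lr = 1.0                # gradient-descent step size

for _ in range(5000):
    for x, target in samples:
        y = sigmoid(w[0] * x[0] + w[1] * x[1] + bias)
        err = y - target                 # deviation from the sample
        grad = err * y * (1.0 - y)       # d(0.5 * err^2) / d(weighted sum)
        w[0] -= lr * grad * x[0]         # modify synaptic connections
        w[1] -= lr * grad * x[1]         # to minimise the loss
        bias -= lr * grad

preds = [1 if sigmoid(w[0] * x[0] + w[1] * x[1] + bias) > 0.5 else 0
         for x, _ in samples]
```

After training, the rounded outputs reproduce the teacher's targets for all four samples.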





The performance of a neural network is a weighted estimate of three of its properties:
degree of convergence - the accuracy with which the model adapts to the given input values;
degree of generalisation - the accuracy with which the model performs on input sets beyond those it was trained on;
stability - the measure of dispersion (scatter) in the accuracy of its predictions.
The above properties of a neural network can be influenced by the following procedures:
selection of a suitable activation function
selection of a suitable loss function
selection of the architecture (structure) of the network
selection of the parameters for gradient descent
selection of the training time

Example of a neural network application in technical analysis

The main purpose of neural network training is to build connections (associations) between observed formations. Neural networks are useful for making decisions based on the signals of several technical indicators, because different indicators are effective under different market conditions: as noted above, trend-following indicators work when there is a trend, while oscillators are useful when the market fluctuates in a range.

Let us show with a simple example (A.-P. Refenes, A. Zaidi) how a neural network can be used in this case. Suppose the task is to find a mixed strategy based on a combination of two strategies, each driven by the signals of one simple indicator: the moving average (MA) and the deviation from the mean (MV).

MA is a simple indicator that compares two moving averages with different averaging periods; it gives a buy signal when the fast MA crosses the slow one from below, and a sell signal when it crosses from above.

MV is a simple indicator, which gives a sell signal when the price is above its average, and a buy signal otherwise.
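One possible reading of these two indicators in Python; the averaging periods, function names, and the use of 1/0 for long/short are assumptions matching the signal encoding described below:

```python
def ma_signal(prices, fast=2, slow=5):
    """MA crossover state: 1 (long) when the fast simple moving average
    is above the slow one, 0 (short) otherwise. Periods are illustrative."""
    sma = lambda n: sum(prices[-n:]) / n
    return 1 if sma(fast) > sma(slow) else 0

def mv_signal(prices, n=5):
    """Deviation from the mean: sell (0) when the price is above its
    average, buy (1) otherwise."""
    avg = sum(prices[-n:]) / n
    return 0 if prices[-1] > avg else 1
```

Note how the two disagree by construction: in a steady uptrend MA says long while MV, being mean-reverting, says sell, which is exactly why a mixing mechanism is needed.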

The system structure is shown in Figure 91.

The system receives the indicator signals (0 - short position, 1 - long position), information on each indicator's performance over the last 2 days (profit or loss), and the current market information.
There are three output signals:

MA: follow the recommendation of the MA-indicator

MV: follow MV-indicator recommendation

NP: do nothing

Each output takes on a value between 0 and 1.


Fig. Schematic of a neural network for the analysis of two indicators.


If both the MA and MV signals are in the ON state (take values greater than 0.5), the recommendation with the higher value is selected; but if NP is in the ON state, nothing is done.
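The selection rule can be sketched as follows (the handling of exactly one ON signal, and all names, are my generalisation; the text only specifies the both-ON and NP-ON cases):

```python
def decide(ma_out, mv_out, np_out, ma_rec, mv_rec):
    """Pick an action from the three network outputs, each in [0, 1].
    An output is ON when it exceeds 0.5; NP (do nothing) wins if ON."""
    if np_out > 0.5:
        return "hold"
    on = [(v, rec) for v, rec in ((ma_out, ma_rec), (mv_out, mv_rec))
          if v > 0.5]
    if not on:
        return "hold"
    return max(on)[1]   # follow the recommendation with the highest output
```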

This example shows one application of a neural network... and what do you have in mind?

 
Neutron wrote: Butterworth 2nd order LPF (

Isn't that Butterworth?

 
Neutron wrote:

Indeed, the 2nd-order Butterworth LPF (red line) shows results not much worse than your neural network filter's. By the way, where is the NN in the code, and why does your creation repaint? That is a rhetorical question. Since, with repainting, what we see in the history does not correspond to reality, the real question is: why are you showing us something that does not actually exist?

The whole formula is an amalgamation of several networks trained with different parameters, and reduced to a single polynomial with their weight coefficients relative to each other.

The entire polynomial is recalculated on every tick, but since the history does not change and only the zero-bar quote changes, the recalculated values on the historical bars remain the same. There is no repainting.

 
sab1uk wrote:

damping - in technical language, cutting off high-frequency components

in the current version, how many decibels of attenuation are achieved between the maximum frequency of the useful signal and a frequency, say, one octave higher?

I haven't checked that.

 
Infinity wrote:

This example shows one application of a neural network... and what do you have in mind?

I use a neural network as a filter, it's a slightly different task from the one you describe.

 
Piligrimm wrote:

I didn't check that.

That's just it... it's all done by eye.

 
Piligrimm wrote:

I use a neural network as a filter, which is a slightly different task to the one you describe.

You obtained some coefficients for your digital filter. Essentially it is an MA with ridiculous coefficients, just like any digital filter. What do you actually want from it? What do you want to filter, and how quickly should it react to changes?
And how do you optimise it?

Infinity, thank you for the explanation. Very simple, clear and logical.

 

I understand what a neural network is, and I see more sense in the example than in this indicator.

The input is a bunch of incomprehensible coefficients. The output is an average. It doesn't even make sense as an EMA.
