How to correctly form input values for a neural network.

 
LeoV wrote >>

Input values are not a simple question either. What to feed into the network's inputs depends on many factors:

1. What type of neural network is used.

2. What we want to get from the neural network: the price, the direction of movement, reversal points, the next candle, or something else.

So we should first decide on these questions, and then decide on the inputs.

I suggest we talk about only two networks for now:

1. Kohonen net

2. MLP.


We'll leave the rest for later.
 

1. The neural network is a conventional 4-5 layer one (feed-forward, without rings, stars or anything else).

2. From the neural network we want to get ba... Oh no. For a start, we want to get the direction of movement and, if possible, an estimate of its strength (in pips, say).

For me, so far (as I understand it), a neural network remains an approximator. Which means I am really trying to express a functional relationship between the input variables and the output variables. Hence my thoughts about the inputs: they should not be too complex, and we should not ask too much of the output. The outputs are more or less clear (direction, magnitude), but what about the input? I have been racking my brains over it for three days now. What frustrates me is that I cannot come up with a way to process the input signal so that it always lies in a fixed range. If we normalize by the maximum of the whole sample, there is always the chance that a larger value will appear in the future, one the network has never seen, and what happens then I do not know. Of course there are input transformations like a sine or a sigmoid, but that seems wrong to me, because I want linear compression.
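For what it's worth, here is a minimal sketch in MQL4 of the linear (min-max) compression being discussed. The window length and the clamping helper are my own assumptions for illustration, not anything proposed in the thread; clamping is exactly the crude fallback being worried about here:

```mql4
// Sketch: linear (min-max) compression of a value into [0, 1],
// using the last `window` values of a series as the reference range.
// A future value outside that range will map outside [0, 1].
double MinMaxScale(double &series[], int start, int window, double value)
  {
   double lo =  DBL_MAX;
   double hi = -DBL_MAX;
   for(int i = start; i < start + window; i++)
     {
      if(series[i] < lo) lo = series[i];
      if(series[i] > hi) hi = series[i];
     }
   if(hi == lo)
      return(0.5);                      // degenerate window, no spread
   return((value - lo) / (hi - lo));    // purely linear compression
  }

// Crude fallback for unseen out-of-range values: clip them.
double Clamp01(double x) { return(MathMax(0.0, MathMin(1.0, x))); }
```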

For the output I will probably use a hyperbolic relationship: (Up-Dn)/(Up+Dn). The indicator is attached.

Files:
_target.mq4  2 kb
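Since the attached _target.mq4 is not reproduced here, a minimal sketch of how a target of this form could be computed, assuming Up and Dn are the summed upward and downward close-to-close movements over some look-ahead horizon (the horizon and the counting rule are my assumptions):

```mql4
// Sketch: a bounded target in [-1, +1] of the form (Up-Dn)/(Up+Dn).
// In MQL4 series indexing, smaller indices are newer bars, so the
// `horizon` future bars of bar `bar` are indices bar-1 .. bar-horizon.
// Assumes bar >= horizon.
double Target(int bar, int horizon)
  {
   double up = 0.0, dn = 0.0;
   for(int i = bar - horizon; i < bar; i++)
     {
      double d = Close[i] - Close[i + 1];
      if(d > 0) up += d;
      else      dn -= d;
     }
   if(up + dn == 0.0)
      return(0.0);                      // flat stretch, neutral target
   return((up - dn) / (up + dn));       // +1 pure up-move, -1 pure down-move
  }
```

A value near +1 means the look-ahead window moved almost purely up, near -1 almost purely down, and near 0 means the movement was mixed.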
 
TheXpert wrote >>

I suggest we talk about only two networks for now:

1. Kohonen network

2. MLP.


What is an MLP - a multilayer perceptron?

Kohonen is good stuff. But probably second in line. Although... it wouldn't hurt to understand what goes where.

By the way, a Kohonen network learns without a teacher (unsupervised), doesn't it?

 
sergeev wrote >>

What is an MLP - a multilayer perceptron?

Yes

By the way, a Kohonen network learns without a teacher, right?

In the original version, yes. But there is a modification, supervised Kohonen, where we assign the winner for each pattern ourselves.
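A minimal sketch of the difference, with the network stored as a flat weight matrix; the dimensions and learning-rate handling are my assumptions, not a reference implementation:

```mql4
// Sketch: one Kohonen update step. In the classic (unsupervised)
// version the winner is the neuron with the nearest weight vector;
// in the supervised variant the winner index is simply the label
// assigned to the pattern. Assumes dim <= 8 and label < neurons.
void KohonenStep(double &w[][8], int neurons, int dim,
                 double &x[], int label, bool supervised, double lr)
  {
   int winner = label;                   // supervised: we pick the winner
   if(!supervised)                       // unsupervised: search for it
     {
      double best = DBL_MAX;
      for(int n = 0; n < neurons; n++)
        {
         double dist = 0.0;
         for(int k = 0; k < dim; k++)
            dist += (x[k] - w[n][k]) * (x[k] - w[n][k]);
         if(dist < best) { best = dist; winner = n; }
        }
     }
   for(int k = 0; k < dim; k++)          // pull the winner's weights toward x
      w[winner][k] += lr * (x[k] - w[winner][k]);
  }
```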

 
So, what about the inputs? Who has which networks and inputs?
 
sergeev wrote >>

2. From the neural network we want to get ba... Oh no. For a start, we want to get the direction of movement and, if possible, an estimate of its strength (in pips, say).

It doesn't even need to be that good; the direction alone is enough. If that works, the strength can be bolted on later.

For me, so far (as I understand it), a neural network remains an approximator.

Yep, that's the way it is.

Which means I am really trying to express a functional relationship between the input variables and the output variables. Hence my thoughts about the inputs: they should not be too complex, and we should not ask too much of the output. The outputs are more or less clear (direction, magnitude), but what about the input? I have been racking my brains over it for three days now. What frustrates me is that I cannot come up with a way to process the input signal so that it always lies in a fixed range.

MACD

Of course there are input transformations like a sine or a sigmoid, but that seems wrong to me, because I want linear compression.

Here's the thing. The task here is not compression but separation, so with everything linear you get not linear compression but linear separation, which cannot even split XOR.

Therefore a non-linearity must be present. There is a theorem that any n-layer linear perceptron can be reduced to a 2-layer analogue: input -> output.

So the linear perceptron is screwed.
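The standard argument behind that theorem, written out: composing purely linear layers just multiplies the weight matrices together, so the whole stack collapses to a single affine map from input to output,

$$ y = W_2\,(W_1 x + b_1) + b_2 = (W_2 W_1)\,x + (W_2 b_1 + b_2) = W x + b, $$

and a single affine map can only draw a linear decision boundary, which is exactly what XOR defeats.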

 
sergeev wrote >>
So, what about the inputs? Who has which networks and inputs?

I'm trying to feed in bounded oscillators, but the results are still a long way off. There's a huge amount of work to do before any conclusions can be drawn.

 
TheXpert wrote >>

MACD

its maximum can still be renewed (exceeded) in the future.

Here's the thing. The task here is not compression but separation, so with everything linear you get not linear compression but linear separation, which cannot even split XOR.

Therefore a non-linearity must be present. There is a theorem that any n-layer linear perceptron can be reduced to a 2-layer analogue: input -> output.

What I mean is not linearity of the output signals. I mean linearly compressing the input data before feeding it into the network: compression to the range [0, 1] based on the whole sample. If the mapping into that range is done by some non-linear function, we get saturation for large values, between which there will then be no difference. That means repeated inputs with contradictory outputs. So the compression has to be linear. But how, so that the maximum holds up in the future? (my brain is sizzling)
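A quick numeric illustration of the saturation point, nothing more than that:

```mql4
// Sketch: a sigmoid squashes large inputs together, so inputs of
// very different size become practically indistinguishable.
double Sigmoid(double v) { return(1.0 / (1.0 + MathExp(-v))); }

void OnStart()
  {
   Print(Sigmoid(1.0));    // ~0.731
   Print(Sigmoid(5.0));    // ~0.993
   Print(Sigmoid(50.0));   // ~1.000 -- 5 and 50 collapse to almost the same value
  }
```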

 
sergeev wrote >>

its maximum can still be renewed (exceeded) in the future.

Yeah, true, but it still seems like a decent option to me.

What I mean is not linearity of the output signals. I mean linearly compressing the input data before feeding it into the network: compression to the range [0, 1] based on the whole sample.

Ah, I see.

 
sergeev wrote >>

If the mapping into that range is done by some non-linear function, we get saturation for large values, between which there will then be no difference. That means repeated inputs with contradictory outputs. So the compression has to be linear. But how, so that the maximum holds up in the future? (my brain is sizzling)


That's why we'll use decorrelation and the like %)
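Decorrelation can mean different things; as a minimal sketch of the idea, here is a crude pairwise version that replaces the second input with its residual after a least-squares regression on the first (my own illustration, not a method anyone in the thread specified):

```mql4
// Sketch: pairwise decorrelation. After this call, b[] holds only
// the part of the original b[] that a[] cannot explain linearly,
// so the two inputs are no longer linearly correlated.
void Decorrelate(double &a[], double &b[], int n)
  {
   double sa = 0, sb = 0, saa = 0, sab = 0;
   for(int i = 0; i < n; i++)
     {
      sa  += a[i];
      sb  += b[i];
      saa += a[i] * a[i];
      sab += a[i] * b[i];
     }
   double denom = n * saa - sa * sa;
   if(denom == 0.0)
      return;                                   // a[] is constant, nothing to do
   double beta  = (n * sab - sa * sb) / denom;  // OLS slope of b on a
   double alpha = (sb - beta * sa) / n;         // OLS intercept
   for(int j = 0; j < n; j++)
      b[j] -= (alpha + beta * a[j]);            // keep only the residual
  }
```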