Neural network and inputs

 
Demi:
Yes, of course! Pair trading goes to the landfill, etc., etc.

You, of course, know better. (sarcasm)

I wasn't talking about paired trading.
 
Demi: Show the result.
I've already shown you the result ))))
 
LeoV:
I wasn't talking about paired trading.

Pair trading is based on "divergence".

I got it, thank you.

 
Demi: Pair trading is based on "divergence".
I wasn't talking about divergence, I was talking about patterns between different instruments, which didn't include correlation divergence )))
 
Figar0:

An interesting way of putting it... What kind of network is it? And the input is, accordingly, some arrangement of previous fractals? As I see it, your two outputs are really a task for two separate networks...

Is it the usual normalization by the maximum? I.e. 10; 1; -5 normalized to 1; 0.1; -0.5?

The normalisation of the weights isn't very clear. Do you normalise them in the same way too? And likewise the intermediate layer-by-layer results? Or have I misunderstood something? If I've got it right, then I'm afraid you're going to run into some pitfalls here.

The questions are strange in isolation from the context. What can anyone advise about the output without knowing the type of network and its task? The same goes for the input...



I didn't work directly with the price. I use the difference between the price and the Parabolic. Why the Parabolic? It has characteristic jumps and dips, and I use Fibonacci shifts of bars at the output so that the signal doesn't fluctuate significantly. So, when I have 8 differences, I normalize them in this manner: I find the maximum by absolute value and divide everything by that coefficient. Then the weights. I don't normalize them, but the resulting sums in the layers naturally have to be scaled down by the same principle. And so on, layer by layer, until I get two output values. If the weights were normalized as well, I suspect that during training the weight values would tend toward the extremes -100, 100 and 0, which is no good, so I normalize only the intermediate layer-by-layer results.
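A minimal sketch of the scheme described above, in Python/NumPy (the layer sizes, weights, and input values are hypothetical, just to illustrate the max-abs normalization applied to the inputs and to each layer's sums, while the weights themselves are left untouched):

```python
import numpy as np

def maxabs_normalize(v):
    """Max-abs scaling: divide a vector by its maximum absolute value.
    Maps values into [-1, 1]; e.g. [10, 1, -5] -> [1.0, 0.1, -0.5]."""
    m = np.max(np.abs(v))
    return v / m if m > 0 else v

# Hypothetical 8 price-minus-Parabolic differences
diffs = np.array([10.0, 1.0, -5.0, 2.0, -0.5, 7.0, -3.0, 0.25])
x = maxabs_normalize(diffs)

# Toy 2-layer pass: weights are NOT normalized, but each layer's
# resulting sums are rescaled by the same max-abs principle.
rng = np.random.default_rng(0)
w1 = rng.normal(size=(8, 4))
w2 = rng.normal(size=(4, 2))

h = maxabs_normalize(x @ w1)    # intermediate sums, rescaled layer by layer
out = maxabs_normalize(h @ w2)  # two output values, already in [-1, 1]
```

The point of rescaling only the layer sums, not the weights, is exactly what the post argues: constraining the weights directly during training would push them toward the boundary values.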
 
Over-normalisation can distort the original signal that carries the useful information; the share of that useful information can shrink or disappear entirely, and then the network stops doing what it should - earning. That is why you have to be very careful with normalisation in financial markets.
 
LeoV:
Over-normalisation can distort the original signal that carries the useful information; the share of that useful information can shrink or disappear entirely, and then the network stops doing what it should - earning. That is why you have to be very careful with normalisation in financial markets.

It is also possible not to normalize the input or intermediate signals at all: by the time the signal has passed through all the layers, its level will simply have grown into the required range, +/-, and the output comes out normalized already... Something like that.
 
grell:

It is also possible not to normalize the input or intermediate signals at all: by the time the signal has passed through all the layers, its level will simply have grown into the required range, +/-, and the output comes out normalized already... Something like that.
The problem is that different inputs can have different scales. A network, like any other algorithm, doesn't like the variables' scales to differ wildly (for example, half the inputs in the range [-0.0001; 0.0001] and the other half in [-1000; 1000]). That can hurt the convergence of training. So it's desirable, if not to normalize, then at least to bring the inputs to comparable scales, ideally of the same order: roughly speaking, the NN will simply learn faster.
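To illustrate the point about mismatched input scales, here is a small hypothetical example (the ranges are taken from the post; the per-column max-abs scaling is one simple way to bring everything to the same order without shifting zero):

```python
import numpy as np

# Hypothetical data: half the inputs live in [-0.0001, 0.0001],
# the other half in [-1000, 1000] -- wildly different scales.
rng = np.random.default_rng(1)
X = np.hstack([
    rng.uniform(-1e-4, 1e-4, size=(100, 4)),
    rng.uniform(-1e3, 1e3, size=(100, 4)),
])

# Per-column max-abs scaling: each input column is divided by its own
# maximum absolute value, so every column ends up spanning [-1, 1].
scale = np.max(np.abs(X), axis=0)
X_scaled = X / scale
```

After this, all eight columns are of the same order, which is the "comparable scales" condition the post asks for; whether you scale further (e.g. standardize) is a separate choice.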
 

Which market are you discussing here: stationary or non-stationary?

 
faa1947:

Which market are you discussing here: stationary or non-stationary?


And why are you asking? Are you looking for someone to blame for your illiteracy? DDD

Seriously, what's the catch?)
