"New Neural" is an Open Source neural network engine project for the MetaTrader 5 platform. - page 97

 
Igor Makanu:

the code is simple, but our input data doesn't quite fit:

Wikipedia on entropy: "... measures the deviation of a real process from an ideal one. ... Mathematically, entropy is defined as a function of the state of the system, defined up to an arbitrary constant."

and?

and what, for a financial time series, could an ideal market be? - who the hell knows; OK, let that be the first assumption: ideal market = sine wave!

as inputs we have at least 3 prices - high, low, close - so which one should we use? - OK, let that be the second assumption: the median price rules!

and what do we measure from and to? - the beginning of the day? the week? expiry day? a trading session? - OK, the start of the day; let that be the third assumption...

in total, 3 questions, and 3 times we assume we are right. Here the problem comes down to combinatorics: how often will we pick the correct initial hypothesis, and how often will our further research lead to a correct market valuation... on history ))))


entropy sounds nice, but I dug into this subject some years ago from the angle of information entropy, and the conclusion is simple: if a pattern begins to form, or the nearest repetition of a candlestick combination shows up in history, it will not work, because what is obvious to everyone does not work in the market. The same goes for patterns and correlations: once they become obvious, they stop appearing )))). In such cases I usually tell myself: you are not the smartest; people just as smart sit half a world away behind their monitors)))

No, look: this entropy estimates the amount of information in the time series. The lower the entropy, the more information (cycles are more pronounced). That is, the measure is relative, and it should be used in comparison with other states, for example. Where it is lower is where to trade - a rough example.

and, by analogy with Hurst, measure it in a sliding window
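
As one concrete (and entirely illustrative) reading of "measure it in a sliding window": Shannon entropy of binned log-returns per window. The function names, bin count, window size, and synthetic prices below are all my own choices, not anything from the thread:

```python
import numpy as np

def shannon_entropy(x, bins=10):
    """Shannon entropy (bits) of a sample, estimated via histogram binning."""
    counts, _ = np.histogram(x, bins=bins)
    p = counts / counts.sum()
    p = p[p > 0]                              # skip empty bins: 0*log(0) -> 0
    return -np.sum(p * np.log2(p))

def rolling_entropy(prices, window=64, bins=10):
    """Entropy of log-returns in a sliding window; compare values relatively."""
    r = np.diff(np.log(prices))
    return np.array([shannon_entropy(r[i - window:i], bins)
                     for i in range(window, len(r) + 1)])

# toy usage on a synthetic random walk; "lower entropy = more structure"
prices = 100.0 * np.exp(np.cumsum(0.01 * np.random.default_rng(1).standard_normal(500)))
ent = rolling_entropy(prices, window=64)
print(ent.min(), ent.mean(), ent.max())
```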
 
Andrey Dik:

If there are fewer neurons in a layer than in the previous one, information is compressed; if there are more than in the previous one, it is "uncompressed".

OK. Thank you. I'll extrapolate from the information I've received for now. Later I will ask some more experts here. ))
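
As a toy illustration of the compress/"uncompress" idea quoted above (entirely my own sketch, nothing from the project's code), a 100-value pattern squeezed through a 10-neuron layer and expanded back, autoencoder-style:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal(100)             # input pattern: 100 values

W_enc = rng.standard_normal((10, 100))   # 100 -> 10: fewer neurons, compress
W_dec = rng.standard_normal((100, 10))   # 10 -> 100: more neurons, "uncompress"

h = np.tanh(W_enc @ x)                   # 10 numbers must summarize the pattern
y = W_dec @ h                            # reconstruction lives in 100 dims again
print(x.shape, h.shape, y.shape)         # (100,) (10,) (100,)
```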
 
Introduction to the concept of entropy and its many faces
  • habr.com
As it may seem, signal and data analysis is a topic studied quite well and talked over hundreds of times. But it has its gaps. In recent years, everyone and their brother has been throwing the word "entropy" around without really understanding what they are talking about. Chaos - yes, disorder - yes, used in thermodynamics - apparently yes too; as applied to signals...
 
Dmitry Fedoseev:

Yes, they are generalised. If the input is, say, 100 bars, the output should be two commands: buy or sell.

The task is not to make the neural network hold a lot of data, but to match its size to the amount of data it is trained on. If the network is too big and there is not enough data, it will learn easily but will not be able to generalize to other data. So the number of neurons should be as small as possible. More than three layers is kind of unnecessary. In the first layer, the number of neurons corresponds to the size of the input data pattern, and in the last one, to the number of resulting variants. And in the intermediate one there are as few as possible, but not fewer than in the output one.

Ok. I need to think about it. I'll let you know later.
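
For what it's worth, a hedged sketch of the sizing heuristic quoted above - a 100-bar input pattern, two output classes (buy/sell), one small intermediate layer - using scikit-learn as my own illustrative library choice (the thread's engine itself is MQL5):

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
X = rng.standard_normal((1000, 100))            # 1000 toy patterns of 100 bars
y = (X[:, :50].mean(axis=1) > 0).astype(int)    # synthetic buy=1 / sell=0 label

# first layer = pattern size (implicit from X), last = 2 classes (implicit
# from y); the intermediate layer is small, but not below the 2 outputs
net = MLPClassifier(hidden_layer_sizes=(8,), max_iter=500, random_state=0)
net.fit(X, y)
print(net.score(X, y))
```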
 
Dmitry Fedoseev:

.... More than three layers is kind of unnecessary. In the first layer, the number of neurons corresponds to the size of the input data pattern, and in the last one, to the number of resulting variants. And in the intermediate one there are as few as possible, but not fewer than in the output one.

It has been proved mathematically (I came across the proof in several books) that a network with one inner layer can approximate any continuous function, and a network with two inner layers can approximate functions with discontinuities as well. So it follows from this proof that more than 2 layers makes no practical sense and only leads to overfitting.

i.e. a maximum of 2 inner layers is needed (and in many cases one is enough).
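
For reference (my own gloss, from the standard literature): the result cited for the continuous case is the universal approximation theorem (Cybenko 1989; Hornik 1991). One common statement:

```latex
% Universal approximation (one hidden layer): for a suitable activation
% \sigma, compact K \subset \mathbb{R}^n, continuous f, and any
% \varepsilon > 0, there exist N, v_i, b_i \in \mathbb{R}, w_i \in \mathbb{R}^n with
F(x) = \sum_{i=1}^{N} v_i \,\sigma\!\left( w_i^{\top} x + b_i \right),
\qquad
\sup_{x \in K} \bigl| F(x) - f(x) \bigr| < \varepsilon .
```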
 
Maxim Dmitrievsky:

No, look: this entropy estimates the amount of information in the time series. The lower the entropy, the more information (cycles are more pronounced). That is, the measure is relative, and it should be used in comparison with other states, for example. Where it is lower is where to trade - a rough example.

and, by analogy with Hurst, measure it in a sliding window

Maxim, well, you're right, but only in theory

here's a chart - what can we take as the amount of information? 1 bar? - not serious; OK, take a group of bars - we get a certain period. Then how is our approach any better than evaluating the current market state with RSI, a stochastic, or some other indicator? - no better at all, imho


applying a TS should be based on the market context - yes, but the context can hardly be formalized: some try to take the current flat as the context and trade the flat, others draw a trend line and wait for a breakout... and who is right?

 
Igor Makanu:

Maxim, well, you're right, but only in theory

here's a chart - what can we take as the amount of information? 1 bar? - not serious; OK, take a group of bars - we get a certain period. Then how is our approach any better than evaluating the current market state with RSI, a stochastic, or some other indicator? - no better at all, imho


applying a TS should be based on the market context - yes, but the context can hardly be formalized: some try to take the current flat as the context and trade the flat, others draw a trend line and wait for a breakout... and who is right?

I see... optimize the window, look at changes in entropy, train the model with different windows, and draw conclusions. It's clear that it shows the past, but if we narrow the forecasting horizon and use machine learning to fill those intervals, we'll get the information

That's kind of what I'm saying.

It won't tell you whether there are periodic cycles or not; it will tell you the entropy. I'm not saying it will work, I'm saying you have to datamine.
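
A possible concrete reading of "optimize the window" (my assumption about the procedure, reusing prices and rolling_entropy from the sketch earlier in the thread; the candidate windows and the "lowest mean entropy" criterion are mine):

```python
# scan several window sizes; pick the one with the lowest mean entropy
candidates = (16, 32, 64, 128, 256)
scores = {w: rolling_entropy(prices, window=w).mean() for w in candidates}
best = min(scores, key=scores.get)
print(scores, "-> chosen window:", best)
```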

 
Maxim Dmitrievsky:

I see... optimize the window, look at changes in entropy, train the model with different windows, and draw conclusions. It's clear that it shows the past, but if we narrow the forecasting horizon and use machine learning to fill those intervals, we'll get the information

That's kind of what I'm saying.

It won't tell you whether there are periodic cycles or not; it will tell you the entropy. I'm not saying it will work, I'm saying you have to datamine.

I won't ask about everything, but I'm tired of reading... How correct is it to train a NN in a sliding window?

- if we're looking for periodic information - yes, it's correct: the NN will find hidden cycles and adjust its weights

- if we're teaching the NN to recognize patterns - yes, the NN will learn

- there are no periodic cycles in the market; somewhere I have a ZigZag that plots the formation time of each ZigZag top, and at any setting there are never periodic repeats - it never happens that the next ZigZag break comes at bars like 5,11,7,3....5,11,7,3.... - there will be all sorts of combinations, but no repetitions.


if we train a NN in a sliding window on non-periodic information, what happens to the weights there? - as far as I remember, you can't even train a single-layer network on XOR, only a multilayer one - so can a sliding window be used for such things? I have my doubts
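
The single-layer remark is the classic XOR result: a single-layer perceptron cannot represent XOR, while a network with one hidden layer can. A quick check (scikit-learn is my illustrative choice here):

```python
import numpy as np
from sklearn.linear_model import Perceptron
from sklearn.neural_network import MLPClassifier

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
y = np.array([0, 1, 1, 0])                         # XOR truth table

single = Perceptron(max_iter=1000).fit(X, y)
multi = MLPClassifier(hidden_layer_sizes=(8,), solver="lbfgs",
                      random_state=0, max_iter=2000).fit(X, y)

print(single.score(X, y))   # stuck around 0.5-0.75: XOR is not linearly separable
print(multi.score(X, y))    # typically 1.0 with one hidden layer
```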


P.S.: datamining - yes, if you manage to filter out data that actually carries information, then the Grail is yours ;)

 
Igor Makanu:

I won't ask about everything, but I'm tired of reading... How correct is it to train a NN in a sliding window?

- if we're looking for periodic information - yes, it's correct: the NN will find hidden cycles and adjust its weights

- if we're teaching the NN to recognize patterns - yes, the NN will learn

- there are no periodic cycles in the market; somewhere I have a ZigZag that plots the formation time of each ZigZag top, and at any setting there are never periodic repeats - it never happens that the next ZigZag break comes at bars like 5,11,7,3....5,11,7,3.... - there will be all sorts of combinations, but no repetitions.


if we train a NN in a sliding window on non-periodic information, what happens to the weights there? - as far as I remember, you can't even train a single-layer network on XOR, only a multilayer one - so can a sliding window be used for such things? I have my doubts


P.S.: datamining - yes, if you manage to filter out data that actually carries information, then the Grail is yours ;)

They don't have to be strictly periodic, but they shouldn't be noise either. The picture is probabilistic, not strict. The sliding window applies to the entropy indicator as well as to the number of features for training; both can be optimized.

If the samples are contradictory you won't get anything - that's why there are so many 50/50 errors. And a cycle can't be contradictory: it either exists or it doesn't, in whatever form. If you add many different cycles, they don't contradict each other.

Cycle/non-cycle is a relative concept within the entropy metric.

 
Maxim Dmitrievsky:

They don't have to be strictly periodic, but they shouldn't be noise either. The picture is probabilistic, not strict. The sliding window applies to the entropy indicator as well as to the number of features for training; both can be optimized.

If the samples are contradictory you won't get anything - that's why there are so many 50/50 errors. And a cycle can't be contradictory: it either exists or it doesn't, in whatever form. If you add many different cycles, they don't contradict each other.

Cycle/non-cycle is a relative concept within the entropy metric.

And how do you measure the degree of entropy of the data?
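
One standard way to put a number on it is sample entropy (SampEn, Richman & Moorman). Below is a minimal, unoptimized sketch of my own; the parameters m and r and the test series are arbitrary:

```python
import numpy as np

def sample_entropy(x, m=2, r=0.2):
    """SampEn(m, r) = -ln(A/B), where A and B count template matches at m+1 and m."""
    x = np.asarray(x, dtype=float)
    tol = r * x.std()                         # tolerance as a fraction of std

    def matches(length):
        t = np.array([x[i:i + length] for i in range(len(x) - length + 1)])
        d = np.max(np.abs(t[:, None] - t[None, :]), axis=2)   # Chebyshev distance
        return (d <= tol).sum() - len(t)      # ordered pairs, minus self-matches

    return -np.log(matches(m + 1) / matches(m))   # undefined if no matches at m+1

print(sample_entropy(np.sin(np.linspace(0, 20, 300))))                 # regular: low
print(sample_entropy(np.random.default_rng(2).standard_normal(300)))  # noise: higher
```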
