Market etiquette or good manners in a minefield - page 66

 
StatBars wrote >>

These are gigabytes of inadequate models... and that is without even mentioning that today's market is not like tomorrow's, etc.

Well, here is a counter-example... Who says they are inadequate (apart from those with outright errors)? You would need to give an example of an adequate model (its algorithm), so that the inadequacy of the others becomes obvious.

 

Their inadequacy is obvious: if a model does not earn, it is inadequate, and the reason why is a secondary matter... All the more so because I know for certain that the series can be brought to a stationary one - so you cannot write your mistakes off to the market today being different from yesterday's.

The fact that your network has to be retrained at every sample is more likely a defect of the preprocessing than a consequence of the market changing.
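For illustration, a minimal sketch of the kind of transformation StatBars is alluding to, assuming a pandas price series named `close`; the log-return and rolling-volatility choices are illustrative, not his actual preprocessing:

```python
import numpy as np
import pandas as pd

def to_log_returns(close: pd.Series) -> pd.Series:
    """Convert a raw price series into log-returns.

    Differencing the log of the price removes the trending (integrated)
    component, which is the usual first step toward a stationary input.
    """
    return np.log(close).diff().dropna()

def normalize(returns: pd.Series, window: int = 500) -> pd.Series:
    """Scale returns by a rolling estimate of their volatility so the
    distribution stays comparable across calm and volatile regimes."""
    return (returns / returns.rolling(window).std()).dropna()
```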

 
StatBars wrote >>

Their inadequacy is obvious: if a model does not earn, it is inadequate, and the reason why is a secondary matter... All the more so because I know for certain that the series can be brought to a stationary one - so you cannot write your mistakes off to the market today being different from yesterday's.

The fact that the network has to be retrained at every sample is more likely a defect of the preprocessing than a consequence of the market changing.

"The person asked" whether Neutron or paralocus thought about the very model used in the NS in this case. Practice today shows it, tomorrow it doesn't. The idea is based on the assertion that the market at each point in time is not like the previous one. Why then can it be predicted at all, using the last few bars? Where is that "chunk", on which the dependence between previous and following values is preserved? So far the experiments have been done with hourly bars. If I want to use pentameters - should I take all pentameters for the same number of hours? Or just the last X number of pentameters? And if I want to use H4? Won't the system work? If there is autocorrelation between 12,13,14,15 hours, will it be between 12.05, 13.05, 14.05, 15.05?

If there are no constant dependencies in the market, there must at least be a coherent idea behind the model... If there is no such idea and cannot be one, it really is better to roll the dice and focus on loss management...
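For illustration, a sketch of how YDzh's autocorrelation question could be checked directly, assuming an M5 close series indexed by timestamp; the function name and resampling parameters are my own, not anything proposed in the thread:

```python
import pandas as pd

def lag1_autocorr(close: pd.Series, rule: str, offset: str = "0min") -> float:
    """Lag-1 autocorrelation of returns after resampling to a coarser bar.

    `rule` is the bar size (e.g. '1H' or '4H'); `offset` shifts the bar
    boundaries, so '5min' builds the 12:05, 13:05, ... grid asked about above.
    """
    bars = close.resample(rule, offset=offset).last().dropna()
    returns = bars.pct_change().dropna()
    return returns.autocorr(lag=1)

# Hypothetical usage with an M5 close series `close_m5`:
# print(lag1_autocorr(close_m5, "1H"))          # on-the-hour grid
# print(lag1_autocorr(close_m5, "1H", "5min"))  # 12:05, 13:05, ... grid
# print(lag1_autocorr(close_m5, "4H"))          # H4 bars
```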

 
YDzh wrote >>

"The person asked" how much thought Neutron or paralocus have given to the very model used in the NS here. In practice it works today and does not work tomorrow. The whole idea rests on the assertion that the market at each moment in time is not like the previous one. Why, then, can it be predicted at all using the last few bars? Where is that "chunk" over which the dependence between previous and following values is preserved? So far the experiments have been done on hourly bars. If I want to use five-minute bars, should I take all the five-minute bars covering the same number of hours, or just the last X five-minute bars? And what if I want to use H4 - will the system stop working? If there is autocorrelation between the 12:00, 13:00, 14:00 and 15:00 bars, will there also be autocorrelation between 12:05, 13:05, 14:05 and 15:05?

If there are no constant dependencies in the market, there must at least be a coherent idea behind the model... If there is no such idea and cannot be one, it really is better to roll the dice and focus on loss management...

If you want to know something, then ask your questions one at a time - in the form you put them, they look rhetorical...

 
StatBars wrote >>

If you want to know something, then ask your questions one at a time - in the form you put them, they look rhetorical...

I am probably a primitive person... It seems to me that the question of what I am relying on, before rushing into months of programming, is not rhetorical at all. The answer might be: "I once read that neural networks are a promising direction for TS development. I decided to try it. By experiment I found that I need to retrain the network at every step and use opening/closing prices as inputs." That would roughly be my own case, only I am moving in a slightly different direction. Knowing Neutron and his love of mathematical "artifice" before drawing any conclusions, I figured he has something to back it up, since he defends this method so ardently. That is why I am interested in the theoretical part. I am curious why, in his eyes, it should work.

 

gpwr, this thread is not meant for such "deep" questions. You just learn how to make a neural net here - that's all.

The question of what to feed to the input, what output to train it towards, and what the network architecture should be is another matter entirely. That is where your "deep" questions come in handy.

 
StatBars wrote >>

There are transformations that can be used to bring the data to stationarity... I know for certain that the series can be brought to a stationary one - so you cannot write your mistakes off to the market today being different from yesterday's.

The fact that the network has to be retrained at every sample is more likely a defect of the preprocessing than a consequence of the market changing.

YDzh wrote >>

The whole idea rests on the assertion that the market at each moment in time is not like the previous one. Why, then, can it be predicted at all using the last few bars? Where is that "chunk" over which the dependence between previous and following values is preserved? If there are no constant dependencies in the market, there must at least be a coherent idea behind the model... If there is no such idea and cannot be one, it really is better to roll the dice and focus on loss management...

We need to define what kind of stationarity we are talking about.

Indeed, we can take a normally distributed random variable with zero expectation and constant variance. This is a stationary process in all its parameters, but it is impossible in principle to earn on the time series obtained by integrating this process! It is the law. You can win, but you cannot beat it statistically - it is a martingale. So this is an example of a stationary process, but not the one I had in mind above.
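A small sketch of this point: the increments are stationary, yet no rule based on the past has positive expected profit on the integrated series. The "follow the last move" rule below is only an example of such a rule, not anything proposed in the thread:

```python
import numpy as np

rng = np.random.default_rng(0)

# Zero-mean Gaussian increments: a process that is stationary in every parameter.
steps = rng.normal(loc=0.0, scale=1.0, size=(10_000, 500))

# Integrating (cumulatively summing) them gives a random walk - a martingale.
walks = steps.cumsum(axis=1)

# Any rule that chooses its position from the past (here: follow the direction
# of the last increment) faces a next increment with zero conditional mean,
# so its expected profit is zero.
position = np.sign(steps[:, :-1])
pnl = (position * steps[:, 1:]).sum(axis=1)

print(walks[:, -1].mean())     # terminal value of the walk: mean ~ 0
print(pnl.mean(), pnl.std())   # strategy profit: mean ~ 0, only the spread grows
```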

You can only make money on time series like market quotes if you identify dependencies between its samples (they do not have to be bars). That is the only requirement. However, it is not enough just to identify such dependencies - those dependencies must be stationary. This requirement is natural and follows from the working conditions of any abstract MTS. It is stationarity of this kind that I meant above and considered obvious. Unfortunately, we cannot speak of stationarity in the full sense - the market is not stationary in principle, otherwise making money in it would be trivially easy! We can only speak of quasi-stationarity (almost-stationarity, or stationarity that holds for longer than the time the analytic block needs to detect it). If, at this level of understanding, we could claim that such processes exist, then we could indeed limit ourselves to AR models... But, as you can already guess, these processes do not repeat, and we would be forced to "prepare" in advance, each time, an AR model with a non-linearity matching whatever is expected in the market. That is nonsense. It is for this reason that a non-linear neural network retrained at every sample, rather than once a month (as was suggested here), is the most adequate tool for identifying and exploiting events in a near-efficient market in a timely manner.
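A minimal sketch of the "retrain at every sample" idea, using a scikit-learn MLP as a stand-in for the NS discussed in the thread; the window length, architecture and function name are illustrative assumptions, not Neutron's actual setup:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

def rolling_retrain_predict(x: np.ndarray, y: np.ndarray, window: int = 300) -> np.ndarray:
    """Retrain a small non-linear net at every new sample and forecast one
    step ahead, instead of fitting once and reusing the weights.

    x: (n_samples, n_features) preprocessed inputs; y: (n_samples,) targets.
    """
    preds = []
    for t in range(window, len(x)):
        model = MLPRegressor(hidden_layer_sizes=(8,), max_iter=200)
        model.fit(x[t - window:t], y[t - window:t])  # fit only on the recent past
        preds.append(model.predict(x[t:t + 1])[0])   # forecast for the new sample
    return np.array(preds)
```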

I am not claiming that the NS is capable of making money in the market (that the average profit exceeds the DC's commission). I am only claiming that the NS is the most adequate tool available today and should form the basis of a TS. And I argue that the only way to gain an advantage over similar tools is the technique of retraining at every sample. It is an attempt to squeeze the maximum out of the potential hidden in the NS and, as a consequence, out of the patterns in the quotes.

Mathemat wrote >>

The question of what to feed to the input, what output to train it towards, and what the network architecture should be is another matter entirely. That is where your "deep" questions come in handy.

Alexey, as always, is right.

Indeed, there is nothing secret in knowing how to properly build an NS and train it. It is the basic material that any self-respecting researcher should know. It is the basics!

The sacred knowledge begins precisely with the preparation of the input data and the definition of the target function for the NS. That will not be discussed here, as a matter of principle. In this area everyone is a creator and an artist; it is what brings money and enjoyment. And it is certainly not hourly bars! They are used in this thread only as a visual aid, and discussing what is better - hourly or 15-minute bars - can only be of academic interest.

 
Finally my single-layer net no longer depends on the number of epochs (beyond a certain number, somewhere under 100). The statistics block certainly helps a lot, but I still have some questions. If you don't mind, may I write to you in a private message?
 
And if I would rather not go through private messages, can you post the question here?
 
Of course. I'll just make some charts.