Using neural networks in trading - page 30

 
FAGOTT:
I won't even hide the harsh truth from you and will tell you, as one artist to another: modern econometrics has no methods for predicting non-stationary series. Only stationary ones, and those non-stationary ones that can be reduced to a stationary form.
Not true. Apart from ARIMA there is FARIMA. State-space models work without any such reduction. The GARCH family of models. A lot has changed in the last 10 years. Look at the list of R packages: they not only handle non-stationarity, but also come with ready-made code.
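To make the GARCH point concrete, here is a minimal numpy sketch of a GARCH(1,1) process (the parameter values are arbitrary illustrations, not fitted to anything). The returns it generates show volatility clustering and look "non-stationary" to the eye, yet the model itself is covariance-stationary as long as alpha + beta < 1:

```python
import numpy as np

def simulate_garch11(n, omega=0.1, alpha=0.1, beta=0.85, seed=0):
    """Simulate r_t = sigma_t * z_t with
    sigma_t^2 = omega + alpha * r_{t-1}^2 + beta * sigma_{t-1}^2.
    Covariance-stationary as long as alpha + beta < 1."""
    rng = np.random.default_rng(seed)
    r = np.empty(n)
    # start the recursion at the unconditional variance
    sigma2 = omega / (1.0 - alpha - beta)
    for t in range(n):
        r[t] = np.sqrt(sigma2) * rng.standard_normal()
        sigma2 = omega + alpha * r[t] ** 2 + beta * sigma2
    return r

returns = simulate_garch11(5000)
# squared returns are positively autocorrelated (volatility clustering),
# even though the returns themselves are uncorrelated
```

This is only a sketch of the mechanism being argued about, not anyone's trading model.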
 
EconModel:

We need to define the object we are working with. Where do neural networkers have this definition? What do they work with? Layers and perceptrons?

The premise: we observe a realisation of a non-stationary process, and usually only the last 30-50 observations are of interest.

Then we decide what we trade. Most people trade a trend. We look, see a trend, and believe the trend will continue into the future, and that the past has nothing to do with it. We simply believe; the past serves only for model building.

This is the initial premise.

And then there are the nuances.

Well, that's easy!

An NS works with input data, output data and the network itself.

20-30 observations is not enough even for an ordinary autoregression, let alone for an NS.

If you're using an NS, there are no "trends".

So whatever you are talking about, it is not an NS.
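The sample-size point can be checked directly. A sketch (my own illustration, not anyone's trading model): estimate an AR(1) coefficient by OLS and compare its approximate standard error at 25 observations versus 2000.

```python
import numpy as np

rng = np.random.default_rng(1)

def simulate_ar1(n, phi=0.5):
    """Simulate x_t = phi * x_{t-1} + e_t with standard normal noise."""
    x = np.zeros(n)
    for t in range(1, n):
        x[t] = phi * x[t - 1] + rng.standard_normal()
    return x

def ar1_ols(x):
    """OLS estimate of phi and its approximate standard error,
    sqrt((1 - phi_hat^2) / n)."""
    y, z = x[1:], x[:-1]
    phi = float(z @ y / (z @ z))
    # guard: in tiny samples phi_hat can stray past 1 in magnitude
    se = float(np.sqrt(max(1.0 - phi ** 2, 0.0) / len(y)))
    return phi, se

phi_small, se_small = ar1_ols(simulate_ar1(25))
phi_large, se_large = ar1_ols(simulate_ar1(2000))
# with 25 points the standard error is several times larger,
# so the fitted coefficient is close to useless for forecasting
```

The standard error shrinks roughly as 1/sqrt(n), which is why 20-30 observations give very little to work with.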

 
FAGOTT:

Well, that's easy!

An NS works with input data, output data and the network itself.

20-30 observations is not enough even for an ordinary autoregression, let alone for an NS.

If you're using an NS, there are no "trends".

So whatever you are talking about, it is not an NS.

Of course, I mean making money. And an NS is an intellectual toy for people of far-above-average intelligence.
 
EconModel:
Not true. Apart from ARIMA there is FARIMA. State-space models work without any such reduction. The GARCH family of models. A lot has changed in the last 10 years. Look at the list of R packages: they not only handle non-stationarity, but also come with ready-made code.

You're confusing me again! You're always trying to confuse me!

I don't remember FARIMA, but GARCH definitely works with stationary series. As I understand it, it imposes stationarity as a necessary condition, and the unconditional variance of the process is constant.

Maybe you mean IGARCH?
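For reference, the condition being argued about can be written in a few lines for GARCH(1,1). This is a sketch; returning None for the non-stationary case is just my convention:

```python
def garch11_unconditional_variance(omega, alpha, beta):
    """A GARCH(1,1) is covariance-stationary only if alpha + beta < 1;
    then its unconditional variance is the constant
    omega / (1 - alpha - beta)."""
    if alpha + beta >= 1.0:
        # alpha + beta == 1 is the IGARCH boundary: shocks to variance
        # never die out and the unconditional variance is not finite
        return None
    return omega / (1.0 - alpha - beta)

garch11_unconditional_variance(0.1, 0.1, 0.85)  # approximately 2.0
garch11_unconditional_variance(0.1, 0.2, 0.8)   # None: IGARCH boundary
```

So both posters can be right: the conditional variance moves, while the unconditional variance stays constant whenever alpha + beta < 1.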

 
FAGOTT:

You're confusing me again! You're always trying to confuse me!

I don't remember FARIMA.

FARIMA is ARIMA with fractional integration. Think Hurst exponent, long memory, long tails.

GARCH is a whole family of models. The modelled residual has time-varying variance, in several senses. The spread of the residual left over after GARCH modelling is usually far smaller than the spread of the original series; often negligible.
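The fractional-integration part is easy to show. The FARIMA operator (1-B)^d with non-integer d expands into weights that decay hyperbolically rather than geometrically, which is exactly the "long memory" (Hurst exponent H = d + 0.5) being mentioned. A small sketch:

```python
def fracdiff_weights(d, n):
    """First n binomial-expansion weights of (1 - B)^d for fractional d:
    w_0 = 1, w_k = w_{k-1} * (k - 1 - d) / k.
    For 0 < d < 0.5 they decay hyperbolically (long memory),
    unlike the geometric decay of ordinary ARMA weights."""
    w = [1.0]
    for k in range(1, n):
        w.append(w[-1] * (k - 1 - d) / k)
    return w

w = fracdiff_weights(0.4, 6)
# w starts 1.0, -0.4, -0.12, ... and then tails off slowly
```

With integer d = 1 the same recursion collapses to ordinary first differencing (weights 1, -1, 0, 0, ...), so FARIMA really is ARIMA with the differencing order allowed to be fractional.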

 
EconModel:


GARCH "works" with stationary series
 
EconModel:

Maybe I don't understand something.

We classify into patterns. We believe that such a pattern will surely appear in the future and that we will be able to use this knowledge for prediction. Right?

On what grounds? Who has proved that such a pattern will appear at all, whether unchanged, slightly changed or strongly changed?

IMHO, if we teach a net to recognise the handwritten letter "a", then there is absolute certainty that this letter will exist in the future, because it exists in the language; and even if in the future most people start writing with their feet, there will still be an "a", just the lettering will change, and perhaps the net will have to be retrained. That speaks of stationarity.

Quotes are a non-stationary process in principle: there are deviations all the time, different at different moments, and comparable to (or exceeding) the stationary part. This is the problem, the non-stationarity of the original: Russian letters today, Chinese letters tomorrow. One has to look for the objective reality that the letters reflect. And that is what neural networkers do not do.


I think you do NOT understand. I think you just have it all mixed up. Patterns as such have been, are, and always will be. The first book I read was about TA, a doc from '90-something. All the figures described there are still present, and most figures of technical analysis can be called patterns. Besides, is the "first-second" wave (the market impulse) not a pattern? With a development into a third wave. Or without. Or, for example, "impulse-bounce-impulse-bounce": Gartley's butterfly. Look at the chart right now, there are plenty of butterflies, and Gartley described this model back in 1935. In general, there is no need to worry about the existence of patterns for a long time to come.

Except that I'm not sure patterns need to be classified. I did an experiment with a single-layer perceptron on recognizing simple patterns. The perceptron learns quickly and recognizes them all. And, of course, the shape of a pattern drifts over time; the perceptron is not bothered by that. So it turns out that classifying patterns is not really necessary. But perhaps it is necessary to classify the "environment" of the patterns. Then you might find that the "neighborhood" class of the same patterns differs in different places, and that this difference should affect something. But this is speculation. We should check it...
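A toy version of that perceptron experiment, as my own reconstruction: the classic perceptron learning rule trained on made-up 3-bar "up" and "down" patterns (the data and labels are invented for illustration).

```python
import numpy as np

def train_perceptron(X, y, epochs=50, lr=1.0):
    """Classic perceptron rule: w += lr * (target - prediction) * x.
    Converges in finitely many updates on linearly separable data."""
    X = np.hstack([X, np.ones((len(X), 1))])  # append a bias column
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        errors = 0
        for xi, target in zip(X, y):
            pred = 1 if xi @ w > 0 else 0
            w += lr * (target - pred) * xi
            errors += int(pred != target)
        if errors == 0:
            break
    return w

# toy "patterns": three-bar up-moves vs down-moves
X = np.array([[1, 2, 3], [2, 3, 5], [1, 3, 4],   # "trend up"
              [3, 2, 1], [5, 3, 2], [4, 3, 1]])  # "trend down"
y = np.array([1, 1, 1, 0, 0, 0])
w = train_perceptron(X, y)
preds = [1 if np.append(x, 1.0) @ w > 0 else 0 for x in X]
# all six training patterns are classified correctly, even though
# the "up" patterns differ from each other in exact shape
```

This illustrates the point in the post: a single-layer perceptron tolerates drift in the exact shape of a pattern, as long as the classes remain linearly separable.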

 
EconModel:

We need to define the object we are working with. Where do neural networkers have this definition? What do they work with? Layers and perceptrons?

The premise: we observe a realisation of a non-stationary process, and usually only the last 30-50 observations are of interest.

Then we decide what we trade. Most people trade a trend. We look, see a trend, and believe the trend will continue into the future, and that the past has nothing to do with it. We simply believe; the past serves only for model building.

This is the initial premise.

And then there are the nuances.


I saw a phrase once. A long time ago. I liked it. I don't remember the source. "In the future it will be the same, only different."
 
Something tells me the whole argument is about to end - with fractals.
 
Alexey_74:


I think you do NOT understand. I think you just have it all mixed up. Patterns as such have been, are, and always will be. The first book I read was about TA, a doc from '90-something. All the figures described there are still present, and most figures of technical analysis can be called patterns. Besides, is the "first-second" wave (the market impulse) not a pattern? With a development into a third wave. Or without. Or, for example, "impulse-bounce-impulse-bounce": Gartley's butterfly. Look at the chart right now, there are plenty of butterflies, and Gartley described this model back in 1935. Anyway, there's definitely no need to worry about the existence of patterns for a long time to come.

But I'm not sure that patterns should be classified. I did an experiment with a single-layer perceptron on recognizing simple patterns. The perceptron learns quickly and then recognizes them all. And, of course, the shape of a pattern drifts over time; the perceptron is not bothered by that. So it turns out that classifying patterns is not really necessary. But perhaps it is necessary to classify the "environment" of the patterns. Then you might find that the "neighborhood" class of the same patterns differs in different places, and that this difference should affect something. But this is speculation. We have to check...

The "head and shoulders" is and will be, as will the thousands of other patterns known in TA and those yet to be found with (or without) an NS. But tell me: if the right shoulder of a "head and shoulders" is broken, what is the probability that the price will go down, and more precisely, what is the confidence interval of the downward move?

In econometrics, the forecast confidence interval is the basic question. And once you try to answer it, non-stationarity comes out, and with it a lot of problems that an NS cannot solve, because they have nothing to do with classification.
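To illustrate what "forecast confidence interval" means here, a sketch for the simplest stationary case: an AR(1) with known parameters (the numbers are made up). The interval widens with the forecast horizon, which is the whole point of the question.

```python
import math

def ar1_forecast_interval(x_last, phi, sigma, h, z=1.96):
    """Approximate 95% h-step-ahead forecast interval for a stationary
    AR(1): point forecast phi^h * x_last, forecast-error variance
    sigma^2 * (1 - phi^(2h)) / (1 - phi^2)."""
    point = phi ** h * x_last
    var = sigma ** 2 * (1 - phi ** (2 * h)) / (1 - phi ** 2)
    half = z * math.sqrt(var)
    return point - half, point + half

lo1, hi1 = ar1_forecast_interval(1.0, 0.9, 1.0, h=1)
lo10, hi10 = ar1_forecast_interval(1.0, 0.9, 1.0, h=10)
# the 10-step interval is much wider than the 1-step one, approaching
# the unconditional band of width 2 * 1.96 * sigma / sqrt(1 - phi^2)
```

For a "head and shoulders" breakout there is no such closed-form interval at all, which is exactly the objection being made.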

Patterns are taught for 18 hours and then examined for credit, with the main question being: do you understand that patterns cannot be used in trading?

So nothing is jumbled up in my head; it all lies flat, at least on this point.

Reason: