Using neural networks in trading - page 28

 
To leksus

What you have written has already been discussed several times, on this forum and elsewhere, so I don't have the energy to write it all out a second time...
 

Robot_al:
... Forex has become more or less clear to me; even when it is unclear, that is just an inversion of "clarity".

... Well, forex is just a place where I was able to apply the logic of George Boole's algebra.


Cool...

Great...

 
LeoV:

What you have written has already been discussed several times, on this forum and elsewhere, so I don't have the energy to write it all out a second time...


Then I haven't read this forum carefully, because I haven't seen any such discussions. Anyway, I don't insist. And writing it all out a second time really is a thankless task.

Coming to this forum, I only asked myself one question. That question seems to have been answered.

 
Alexey_74:

solar, I would draw your attention to the fact that I was hinting at the classification problem, and that is the principle of unsupervised learning, i.e. there is no "output" concept here.

P.S. sorry, for some reason I had to change my nickname leksus to the current one.


Just one thing before you concentrate completely on classification. Think about this: if you teach the network only 5 letters of the alphabet, for example, how will you do text recognition after scanning?

I'm not trolling you at all. The main message is that completeness of information is needed. You're concentrating on the wrong thing so far, imho.

 
solar:

Just one thing before you concentrate completely on classification. Think about this: if you teach the network only 5 letters of the alphabet, for example, how will you do text recognition after scanning?

No, of course I won't be doing text recognition. There's no point in learning all five letters...

I'm not trolling you at all. The main message is that completeness of information is needed. You're concentrating on the wrong thing so far, imho.

Thank you, I'm trying to be constructive too. And it seems to me that we are talking about different things. By complaining about difficulties with classification I meant the following.

Let's take the classical case - the plane. The theory states that data (in the case of the plane) should be linearly separable to produce a successful classification.

(sorry, I didn't have any nice pictures, I had to make some quick pics in Excel).

Suppose we took data with two parameters, X and Y (the plane...). We attached them to unit vectors and got the following picture: five distinctly separate areas. Any SOM will manage the classification at once, and the classification really will be a classification. Any new data point will fall into one of the classes. The properties of each class are known to us, so simply by finding out which class a new data point falls into, we immediately know everything about it. With all that this implies...

Unfortunately, classical and practical cases, as they say in Odessa, are two big differences.

In the practical case we unloaded the data and got a picture like this. Classification is certainly possible here too, but it has no practical value. We can specify the same five classes and the SOM will honestly "draw" them, simply spreading the cluster centres evenly. A newly arrived data point will land somewhere, but this "somewhere" no longer means anything. All the data, along with their properties, are evenly scattered (jumbled) across the plane. If we believe such a classification and assign a new data point to one of the classes, we are only fooling ourselves.

This is the crux of the problem, and it is what I meant in that post of mine. No matter how I approached the problem, I never managed to get data with clear separability. So either there is no separability at all and there is no point even trying, or I simply lack the skill. Mother Nature has blessed me with some self-criticism, so I lean towards the second option; that is why I consult various comrades. Once you have a clear classification, you can then work with a probabilistic net and fuzzy logic.
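For readers who want to reproduce these two pictures without Excel, here is a minimal sketch in plain numpy. It is not anyone's actual code from this thread; the blob centres, the 1-D map size and the spread measure are arbitrary illustrative choices. It trains a tiny self-organising map first on five well-separated blobs and then on a uniformly scattered cloud, and compares how tightly the resulting classes hold together.

```python
import numpy as np

rng = np.random.default_rng(0)

# The idealised picture: five well-separated 2-D "classes" on the plane.
centres = np.array([[0.0, 0.0], [5.0, 0.0], [0.0, 5.0], [5.0, 5.0], [2.5, 2.5]])
blobs = np.vstack([c + 0.3 * rng.standard_normal((100, 2)) for c in centres])

def train_som(data, n_units=5, iters=3000, lr0=0.5, sigma0=1.0, seed=1):
    """Minimal 1-D self-organising map: a chain of n_units weight vectors."""
    r = np.random.default_rng(seed)
    w = data[r.choice(len(data), n_units, replace=False)].copy()
    for t in range(iters):
        x = data[r.integers(len(data))]
        frac = t / iters
        lr = lr0 * (1.0 - frac)                              # decaying learning rate
        sigma = sigma0 * (1.0 - frac) + 1e-3                 # decaying neighbourhood width
        bmu = int(np.argmin(((w - x) ** 2).sum(axis=1)))     # best matching unit
        grid_dist = np.abs(np.arange(n_units) - bmu)         # distance along the chain
        h = np.exp(-(grid_dist ** 2) / (2.0 * sigma ** 2))   # Gaussian neighbourhood
        w += lr * h[:, None] * (x - w)                       # pull units toward the sample
    return w

def mean_within_class_spread(data, w):
    """Average distance of the points to their winning unit."""
    d = np.linalg.norm(data[:, None, :] - w[None, :, :], axis=2)
    labels = d.argmin(axis=1)
    return float(np.mean([d[labels == k, k].mean() for k in np.unique(labels)]))

# Separable case: the units should settle near the five blob centres,
# and any new point maps cleanly onto one class.
w_blobs = train_som(blobs)
print("unit positions:\n", np.round(w_blobs, 2))
print("new point [4.8, 0.2] -> class", ((w_blobs - [4.8, 0.2]) ** 2).sum(axis=1).argmin())

# Practical case: uniformly scattered points. The SOM still "draws" five
# classes, but membership carries almost no information.
uniform = rng.uniform(0.0, 5.0, size=(500, 2))
w_uni = train_som(uniform)
print("spread, separated blobs:", round(mean_within_class_spread(blobs, w_blobs), 3))
print("spread, uniform cloud  :", round(mean_within_class_spread(uniform, w_uni), 3))
```

With the separated blobs the within-class spread is far smaller than the overall spread, so class membership tells you something; with the uniform cloud the two are comparable, which is exactly the "classification of no practical value" described above.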

 
Alexey_74:

No, of course I won't be doing text recognition. There's no point in learning all five letters...

Thank you, I'm trying to be constructive too. And it seems to me that we are talking about different things. By complaining about difficulties with classification I meant the following.

Let's take the classical case - the plane. The theory states that data (in the case of the plane) should be linearly separable to produce a successful classification.

(sorry, I didn't have any nice pictures, I had to make some quick pics in Excel).

Suppose we took data with two parameters, X and Y (the plane...). We attached them to unit vectors and got the following picture: five distinctly separate areas. Any SOM will manage the classification at once, and the classification really will be a classification. Any new data point will fall into one of the classes. The properties of each class are known to us, so simply by finding out which class a new data point falls into, we immediately know everything about it. With all that this implies...

Unfortunately, classical and practical cases, as they say in Odessa, are two big differences.

In the practical case we unloaded the data and got a picture like this. Classification is certainly possible here too, but it has no practical value. We can specify the same five classes and the SOM will honestly "draw" them, simply spreading the cluster centres evenly. A newly arrived data point will land somewhere, but this "somewhere" no longer means anything. All the data, along with their properties, are evenly scattered (jumbled) across the plane. If we believe such a classification and assign a new data point to one of the classes, we are only fooling ourselves.

This is the crux of the problem, and it is what I meant in that post of mine. No matter how I approached the problem, I never managed to get data with clear separability. So either there is no separability at all and there is no point even trying, or I simply lack the skill. Mother Nature has blessed me with some self-criticism, so I lean towards the second option; that is why I consult various comrades. Once you have a clear classification, you can then work with a probabilistic net and fuzzy logic.

Typical TA reasoning, blind faith in the postulate - "History repeats itself".

Everything you write about is good (maybe) for data analysis, but it is not even close to good for forecasting.

Why do you think that the classes identified by a successful classification (assuming you manage to solve that problem) will still exist in the future? The main issue is not the classification but the predictability of the method, the confidence that it can be used in the future. That is a completely different problem. That is why neural networks have very limited value in trading. IMHO.
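This objection can at least be checked mechanically. Below is a hedged sketch of the idea, with k-means standing in for whatever classifier is actually used and a drifting random walk standing in for real feature data (both are assumptions for illustration): fit the classes on one window and measure how well the same class centres still describe the next window.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)

# Stand-in for feature vectors extracted from price history; a drifting
# random walk is used deliberately, as a crude model of non-stationary data.
features = rng.standard_normal((1000, 2)).cumsum(axis=0)
train, test = features[:500], features[500:]

km = KMeans(n_clusters=5, n_init=10, random_state=0).fit(train)

def mean_dist_to_centres(X, centres):
    """Average distance of each point to its nearest cluster centre."""
    d = np.linalg.norm(X[:, None, :] - centres[None, :, :], axis=2)
    return float(d.min(axis=1).mean())

print("in-sample fit    :", round(mean_dist_to_centres(train, km.cluster_centers_), 3))
print("out-of-sample fit:", round(mean_dist_to_centres(test, km.cluster_centers_), 3))
# If the out-of-sample distance is much larger, the classes found in-sample
# did not persist into the "future" -- exactly the predictability objection.
```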

 
Alexey_74:


As a rule, the full power of networks can be harnessed if you use indirect, rather than direct, data that has a stable relationship to the object.

For example, the illuminance of objects during the day and in the evening depends on the angle of the sun; so if you use data about the sun's light, you will get the illumination of the objects.

The point of networks is to reconstruct information from related events. A network is not a magic function; it has exactly as much magic in it as any other mathematical function.

I won't insist, but whatever you want to do with any tool - classify, approximate, predict, interpolate or anything else - you will need to fine-tune it. And above all, I will stress it again: you need all the data related to the object, and that is not just OHLCV transformed in one way or another. For example, can the movement of gold affect a given instrument? Oil? And so on...

In general, good luck with this difficult task of yours.
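The advice above boils down to widening the input beyond transformed OHLCV of a single symbol. A small pandas sketch of such a feature matrix; the column names, prices and the one-bar lag are invented purely for illustration.

```python
import pandas as pd

# Hypothetical aligned daily closes; symbols and numbers are made up.
prices = pd.DataFrame({
    "eurusd": [1.10, 1.11, 1.09, 1.12, 1.13, 1.12],
    "gold":   [1900.0, 1912.0, 1895.0, 1930.0, 1941.0, 1925.0],
    "oil":    [75.0, 76.2, 74.1, 77.5, 78.0, 77.2],
})

returns = prices.pct_change().dropna()

# Crude first check of whether the related markets carry any signal at all.
print(returns.corr()["eurusd"])

# Feature matrix for a network: lagged returns of the instrument itself
# plus the related instruments, not just one symbol's own history.
features = pd.DataFrame({
    "eurusd_lag1": returns["eurusd"].shift(1),
    "gold_lag1":   returns["gold"].shift(1),
    "oil_lag1":    returns["oil"].shift(1),
}).dropna()
print(features)
```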

 
solar:

Maybe you shouldn't shove just anything into it after all? Maybe there should be some connection between what goes in and what comes out?

Are you suggesting dollars for the input?

I've been thinking. Maybe that's right. ))

 
EconModel:

Typical TA reasoning, blind faith in the postulate - "History repeats itself".

There has never been any blind faith; I am a materialist to the core. But I am absolutely convinced that "history repeats itself". I believe that it does repeat. That does not mean that the price today at 3 pm will behave the same way it did last Tuesday at 3 pm, or anything of the kind.

Everything you write about is good (maybe) for data analysis, but it's nowhere near good enough for forecasting.

Why do you think that the classes identified by a successful classification (assuming you manage to solve that problem) will still exist in the future? The main issue is not the classification but the predictability of the method, the confidence that it can be used in the future. That is a completely different problem. That is why neural networks have very limited value in trading. IMHO.


In this case, analysis and prediction come as a single package.

As for the classes, I'm not asserting anything yet; at the moment I am only speculating. And you are right, the main issue is not classification. Classification is only a kind of base; the further goal is precisely predictability. But here too I'm not asserting anything. I do not know whether it can work or not. I will find out when I actually build the "device". Only then will it become known.

 
Alexey_74:

There has never been any blind faith; I am a materialist to the core. But I am absolutely convinced that "history repeats itself". I believe that it does repeat. That does not mean that the price today at 3 pm will behave the same way it did last Tuesday at 3 pm, or anything of the kind.

In this case, analysis and prediction come as a single package.

As for the classes, I'm not asserting anything yet; at the moment I am only speculating. And you are right, the main issue is not classification. Classification is only a kind of base; the further goal is precisely predictability. But here too I'm not asserting anything. I do not know whether it can work or not. I will find out when I actually build the "device". Only then will it become known.

Maybe I don't understand something.

We classify into patterns. We suppose that such a pattern will appear in the future and we will be able to use this knowledge for forecasting. Right?

On what basis? Who has proved that such a pattern will occur at all, whether unchanged, slightly changed or strongly changed?

IMHO, if we teach the net to recognise the handwritten letter "a", then there is absolute certainty that this letter will exist in the future, because it exists in the language. Even if in the future most people start writing with their feet, there will still be an "a"; only the lettering will change, and perhaps the net will have to be retrained. That is what stationarity means.

Quotes are a non-stationary process in principle, i.e. there are deviations all the time, different at different times, and they are comparable to (or exceed) the stationary part. That is the problem: the non-stationarity of the source, Russian letters today and Chinese letters tomorrow. One has to look for the objective reality that the letters reflect, and that is precisely what the neural network people do not do.
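The "Russian letters today, Chinese letters tomorrow" point can be made concrete with a unit-root test. A small sketch using the augmented Dickey-Fuller test from statsmodels on a synthetic geometric random walk (an assumption standing in for real quotes):

```python
import numpy as np
from statsmodels.tsa.stattools import adfuller

rng = np.random.default_rng(0)
# Geometric random walk as a stand-in for a price series (always positive).
prices = 100.0 * np.exp(0.01 * rng.standard_normal(2000).cumsum())
log_returns = np.diff(np.log(prices))

# Augmented Dickey-Fuller test: a low p-value rejects the unit-root
# (non-stationarity) hypothesis for that series.
for name, series in [("prices", prices), ("log returns", log_returns)]:
    stat, pvalue = adfuller(series)[:2]
    print(f"{name:11s}  ADF stat = {stat:7.2f}   p-value = {pvalue:.3f}")
```

Typically the prices fail the test while the returns pass it; yet even stationary-looking returns can have drifting statistical properties over time, which is exactly the non-stationarity being discussed here.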
