Using neural networks in trading - page 33

 
Roman.:


Handsome - beautiful.

In the Army, a comrade of mine had this inscription tattooed on his shoulder... :-)

I think the subject is not covered, at least in terms of organisation and preparation of input data to the network...

I agree. My posts are off-topic. If you'll excuse me.
 
EconModel:

We need to define the object we are working with. Where do neural networkers have this definition? What do they work with? Layers, perceptrons?


Why do neural networkers need any layers or perceptrons? Let the developers of neural networks deal with them.

For a neural networker, a neural network is a black box.

All a neural network engineer needs in order to set the task is the values on the inputs and the values on the outputs: a sample divided into two parts - training and testing (a forward test). Accordingly, the task is to ensure that, after training on the training sample, the network's outputs on the test sample coincide with the target output values with the maximal probability.
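A minimal sketch of that "black box" workflow, assuming scikit-learn is available; the feature matrix and binary target here are synthetic placeholders, not anything from the thread:

```python
# The only thing the task setter prepares is the input/output sample
# and its split into a training part and a test (forward) part.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 10))                         # hypothetical input values
y = (X[:, 0] + rng.normal(size=1000) > 0).astype(int)   # hypothetical output values

# shuffle=False keeps the last part of the sample as the "forward" period,
# which is closer to a real forward test than a random subsample.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, shuffle=False)

net = MLPClassifier(hidden_layer_sizes=(20,), max_iter=1000, random_state=0)
net.fit(X_train, y_train)                                # training sample

# The quality criterion: how often the black box's outputs coincide
# with the target outputs on the test (forward) sample.
print("out-of-sample match rate:", net.score(X_test, y_test))
```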

 
EconModel:

You have to start here. Start with a simple one.

A stationary series means the mean (expected value) and the variance are constant. With ARCH the variance is not only non-constant, it also depends on the previous values.

When building models, checking the model residuals for ARCH is mandatory, because least-squares estimation cannot be applied in the presence of ARCH.

Start from here - it couldn't be simpler.

Don't ask me - all this has been explained on this forum long time ago.
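For what it's worth, a minimal sketch of that residual check, assuming statsmodels is available; the OLS fit and the data are purely hypothetical stand-ins for whatever model is actually being built:

```python
# Engle's ARCH LM test on model residuals: a small p-value suggests
# ARCH effects, i.e. plain least-squares inference is not trustworthy.
import numpy as np
import statsmodels.api as sm
from statsmodels.stats.diagnostic import het_arch

rng = np.random.default_rng(1)
x = rng.normal(size=500)
y = 0.5 * x + rng.normal(size=500)          # hypothetical data

model = sm.OLS(y, sm.add_constant(x)).fit() # whatever model is being built
resid = model.resid

lm_stat, lm_pvalue, f_stat, f_pvalue = het_arch(resid, nlags=5)
print("ARCH LM p-value:", lm_pvalue)        # < 0.05 -> evidence of ARCH
```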

 
EconModel:

Patterns are taught for 18 hours and then examined for a credit, with the main question being: do you understand that patterns should not be used in trading?

That being said, I would say that your teacher was, at the very least, a little off the mark.
I'm not being sarcastic - I was a student too. I'm curious: after which answer was the credit given?

 
Alexey_74:

That being said, I would say that your teacher was, at the very least, a little off the mark.
I'm not being sarcastic - I was a student too. I'm curious: after which answer was the credit given?

I'm sorry, but students come in all kinds of specialisations. TA is taught in a fortnight to anyone at a dealing centre, while econometrics takes five years and not everyone wants it.

I can say it again: in TA the question of the probability of a forecast being fulfilled is not raised at all. We just believe, lose our money, and don't understand why. In econometrics the question of confidence in the modelling results is the main issue. Just like in life.

 
Reshetov:

Why do neural networkers need any layers or perceptrons? Let the neural network developers deal with them.

For a neural network designer, a neural network is a black box.

All a neural network engineer needs in order to set the task is the values on the inputs and the values on the outputs: a sample divided into two parts - training and testing (a forward test). Accordingly, the task is to ensure that, after training on the training sample, the network's outputs on the test sample coincide with the target output values with the maximal probability.


Dear Yuri, I would ask you not to make such a statement in a general sense (I mean about all neural networkers). You see, neural networkers (in the general sense) quite often worry about the number of hidden layers, and periodically about the number of neurons in those hidden layers. There are also sometimes difficulties with the choice of activation function. And sometimes a gradient descent method has to be chosen as well. I'm not offended at all, not in the least. But still, you have oversimplified the situation.
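For illustration only, a sketch of the choices listed above (hidden layers, neurons per layer, activation function, gradient-descent method), using scikit-learn's MLPClassifier as an arbitrary example of a network implementation:

```python
# The knobs that "neural networkers in the general sense" worry about:
# how many hidden layers, how many neurons in each, which activation
# function, and which gradient-descent variant to train with.
from sklearn.neural_network import MLPClassifier

net = MLPClassifier(
    hidden_layer_sizes=(30, 15),   # two hidden layers: 30 and 15 neurons
    activation="tanh",             # activation function (vs. 'relu', 'logistic')
    solver="sgd",                  # gradient-descent method (vs. 'adam', 'lbfgs')
    learning_rate_init=0.01,
    max_iter=2000,
    random_state=0,
)
# net.fit(X_train, y_train) would then train it on the prepared sample.
```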
 
EconModel:
Agreed. My posts are off-topic. If you'll excuse me.

It's okay. We're just talking...
 
Alexey_74:

And sometimes the network architecture can also be different.
 
EconModel:

I'm sorry, but students come in all kinds of specialisations. TA is taught in a fortnight to anyone at a dealing centre, while econometrics takes five years and not everyone wants it.

I can say it again: in TA the question of the probability of a forecast being fulfilled is not raised at all. We just believe, lose our money, and don't understand why. In econometrics the question of confidence in the modelling results is the main issue. Just like in life.


I'm not arguing. Econometrics is your field, not mine. I only have three years of mathematics under my belt, and in a non-mathematics department at that. And TA has never been required to have a "confidence interval". TA indicates (shows the trader) the occurrence of a situation after which an event will happen with a high degree of probability - as a rule, a move to one side or the other. In other words, a forecast of the event and only the event. As for where exactly the move will end, TA has almost never said, with rare exceptions. More often than not it is "wherever it blows, that's where we trade".

EconModel, I've raised the white flag. I was tired of operating in this mode. The truth stopped trying to get a word in edgewise about 7 pages ago. Writing for the sake of writing is not my thing. I've known how to keep quiet since I was a kid.

 
Alexey_74:

No, of course I'm not doing text recognition. There's no point in learning all five letters...

Thank you, I'm trying to be constructive too. And I thought we were talking about different things. In my lament about difficulties with classification I meant the following.

Let's take the classical case - the plane. The theory states that data (in the case of the plane) should be linearly separable to produce a successful classification.

(sorry, I didn't have any nice pictures, I had to make some quick pics in Excel).

Suppose we took data with 2 parameters, X and Y (the plane...). We normalised them to unit vectors and got the following picture. We see 5 distinctly separate areas. Any SOM can manage this classification at once, and the classification will be exactly that - a classification. Any new data point will fall into one of the classes. The properties of each class are known to us, so simply by finding out which class the new data point falls into, we know everything about it. With all that this implies...

Unfortunately, classical and practical cases, as they say in Odessa, are two big differences.

In the practical case we export the data and get a picture like this one. Classification is certainly possible here too, but it has no practical value. We can specify the same 5 classes and a SOM will honestly "draw" them, simply distributing the cluster centres evenly. A newly arrived data point will land somewhere, but that "somewhere" no longer means anything. All the data, together with their properties, are evenly scattered (jumbled) across the plane. If we trust such a classification and assign a new data point to one of the classes, we are only fooling ourselves.

This is the crux of the problem, and what I meant in that post of mine. No matter how I approached the problem, I never managed to get data with clear separability. So either there is no separability at all and there is no point even trying, or I simply lack the skill. Mother Nature has blessed me with some self-criticism, so I lean towards the second option. That is why I consult with various comrades. Once you have a clear classification, then you can work with a probability grid and fuzzy logic.
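A small sketch of that difference, with scikit-learn's KMeans and the silhouette score standing in for the SOM from the post; the point is only that the same clustering call looks meaningful on well-separated data and meaningless on a uniformly scattered plane:

```python
# Same clustering procedure on two data sets: well-separated blobs
# (the "classical" picture) and uniform noise (the "practical" picture).
# The silhouette score shows whether the 5 classes actually mean something.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs
from sklearn.metrics import silhouette_score

rng = np.random.default_rng(0)

separable, _ = make_blobs(n_samples=500, centers=5, cluster_std=0.5,
                          random_state=0)           # 5 distinct areas
scattered = rng.uniform(-10, 10, size=(500, 2))     # evenly jumbled plane

for name, data in [("separable", separable), ("scattered", scattered)]:
    labels = KMeans(n_clusters=5, n_init=10, random_state=0).fit_predict(data)
    print(name, "silhouette:", round(silhouette_score(data, labels), 2))
# separable gives a high score (the classes are real); scattered gives a much
# lower one (the 5 "classes" are just an even partition of the plane).
```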

Let's take the classical case - the plane. The theory states that in order to produce a successful classification the data (in the case of the plane) must be linearly separable.

The plane is a classical example, not a classical case. And this simple separability is used in such examples purely to illustrate the idea.

It is necessary to gradually increase the dimensionality of the feature vector in order to construct a practically acceptable classification. In this case, the separation of classes will have to be non-linear.
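As a rough sketch of that remark (concentric-circle data and scikit-learn's SVC are arbitrary stand-ins, not anything from the thread): a linear boundary fails on data that is not linearly separable in the original plane, while a kernel that implicitly raises the feature dimensionality separates the classes non-linearly.

```python
# Data that is not linearly separable in the original 2D plane.
# A linear classifier fails; an RBF kernel, which implicitly raises the
# dimensionality of the feature vector, separates the classes non-linearly.
from sklearn.datasets import make_circles
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

X, y = make_circles(n_samples=500, factor=0.4, noise=0.08, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

for kernel in ("linear", "rbf"):
    clf = SVC(kernel=kernel).fit(X_tr, y_tr)
    print(kernel, "test accuracy:", round(clf.score(X_te, y_te), 2))
# linear stays near chance level (no straight line separates concentric rings);
# rbf reaches close to 1.0 by separating the classes non-linearly.
```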
