Neural networks. Questions from the experts. - page 13

 
joo:

Introduce a third type of signal. The possible signals are then:

0, 1, or 2

Okay. This is not a problem. But how will PNN handle this value?

After all, the interval [0;1] covers the possible probability values, so how does a 2 fit in? I can't figure it out...

 
lasso:

Okay. That's not a problem. But how will PNN handle this value?

After all, the interval [0;1] covers the possible probability values, so how does a 2 fit in? I can't figure it out...

I don't see the problem. Use a sigmoid with range [-1;1]; 0 will correspond to no signal. Then you have three types of "clean" signal.
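The suggestion above can be sketched as a symmetric sigmoid, i.e. the standard sigmoid rescaled to the interval (-1;1) (which is algebraically the same as tanh(x/2)). This is only an illustration of the encoding idea, not anyone's actual indicator code; the input values are made up.

```python
import math

def bipolar_sigmoid(x: float) -> float:
    """Sigmoid rescaled to (-1, 1): 2/(1+e^-x) - 1, equivalent to tanh(x/2)."""
    return 2.0 / (1.0 + math.exp(-x)) - 1.0

# Three "clean" signal levels map to the ends and the middle of the range:
print(bipolar_sigmoid(-10.0))  # close to -1: first signal type
print(bipolar_sigmoid(0.0))    # exactly 0: no signal
print(bipolar_sigmoid(10.0))   # close to +1: second signal type
```

As joo points out next, the catch is what the middle of the range means when the output is read as a probability.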
 
joo:
I don't see the problem. Use a sigmoid with range [-1;1]; 0 will correspond to no signal. Then you have three types of "clean" signal.

I also considered this option, but in that case 0 sits in the middle of the range, i.e. it corresponds to a probability of 0.5.

And a 0.5 probability of an event occurring versus no information at all about the event are, I think, quite different things. That is where the problem lies (

 

Then you need three types of events. Each has a range of [0;1] (or whatever is more convenient), and you consider the probability of each event.

Let me say at once: this is a dead end. You cannot describe the probability of an event and then train the network on that probability. Suppose the network outputs a 90% probability for an event. So what if the event does not occur? Then the network is wrong; but why should it be wrong when a 10% probability still remained? You simply cannot put together an adequate training set, that's all.

 

Yes, thank you, that sounds about right. I'll have to think about it.

 
joo:

Then you need three types of events. Each has a range of [0;1] (or whatever is more convenient), and you consider the probability of each event.

Let me say at once: this is a dead end. You cannot describe the probability of an event and then train the network on that probability. Suppose the network outputs a 90% probability for an event. So what if the event does not occur? Then the network is wrong; but why should it be wrong when a 10% probability still remained? You simply cannot put together an adequate training set, that's all.

In classification problems it is best to use SOFTMAX rather than a sigmoid as the activation function in the output layer. Each output neuron then corresponds to a class, and its output gives the probability of membership in that class. The layer's outputs sum to 1, as they should.

lasso, you can read about the activation functions, including SOFTMAX, here, page 22

 
joo:

Let me say at once: this is a dead end. You cannot describe the probability of an event and then train the network on that probability. Suppose the network outputs a 90% probability for an event. So what if the event does not occur? Then the network is wrong; but why should it be wrong when a 10% probability still remained? You simply cannot put together an adequate training set, that's all.

What exactly is the dead end: using a probabilistic NN in trading, or my description of this training set?

I hope it's the latter ))

And in general, what training set can be called adequate?

For example, we feed the PNN three Oscillator values in the interval [-1;1] from three different periods, and pair them with a target output of 0.70 (the price covered only 35 pips of the expected 50-pip move).

Is this an adequate training set?

 
alsu:

In classification problems it is best to use SOFTMAX rather than a sigmoid as the activation function in the output layer. Each output neuron then corresponds to a class, and its output gives the probability of membership in that class. The layer's outputs sum to 1, as they should.

Whether it is better or not is up to you; it makes no real difference and depends on what lasso needs. If desired, the outputs of the layer's neurons can be made to sum to 1 even with a sigmoid. But the problem remains the same: the inability to provide an adequate training set.
 
lasso:

What exactly is the dead end: using a probabilistic NN in trading, or my description of this training set?

I hope it's the latter ))

The dead end is trying to determine the probability of a particular trading event.

lasso:

And in general, what training set can be called adequate?

For example, we feed the PNN three Oscillator values in the interval [-1;1] from three different periods, and pair them with a target output of 0.70 (the price covered only 35 pips of the expected 50-pip move).

Is this an adequate training set?

What information does the figure 0.70 carry about the probability of an event that has already happened? None. Therefore the result will be none as well.

A PNN can be used to classify certain conditions and/or to decide whether a figure belongs to a certain pattern, but it cannot be used as a tool for determining the probability of some event's outcome. Or rather, it can, but the probability value it finds will not be useful (I wrote above why).

 
lasso:

Okay. That's not a problem. But how will PNN handle this value?

After all, the interval [0;1] covers the possible probability values, so how does a 2 fit in? I can't figure it out...


There are actually two options:

1. Binary coding of inputs (one input per event): 0 - the event did not happen, 1 - it happened.

2. Extending the set of values for each input (as you have already been told: 0, 1, 2...). There is no conflict with the range [0;1] here: you get probabilities at the network's output, while the inputs do not have to be probabilities. If that does not convince you, there is another way: divide the interval [0;1] into the required number of parts (0 - the event did not occur, 0.5 - no observation, 1 - the event occurred).
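Both encoding options above can be sketched in a few lines; the event names and the three-event count are illustrative, not from the original posts:

```python
# Option 1: binary coding, one input per event type (a one-hot vector).
def one_hot(event_index: int, n_events: int = 3):
    """Return a vector with 1.0 at the position of the event that occurred."""
    return [1.0 if i == event_index else 0.0 for i in range(n_events)]

# Option 2: a single input over [0;1] split into the required number of parts.
TERNARY = {
    "did_not_occur": 0.0,   # event did not occur
    "no_observation": 0.5,  # no information about the event
    "occurred": 1.0,        # event occurred
}

print(one_hot(1))                 # second event occurred: [0.0, 1.0, 0.0]
print(TERNARY["no_observation"])  # 0.5
```

Option 2 keeps the input count small, while Option 1 avoids giving the "no observation" case an artificial midpoint value.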
