Market etiquette or good manners in a minefield - page 87

 
Neutron wrote >>
I hand this task over to the NS: on the basis of an analysis of as "short" a history as possible (it is still history - what else is there to use in TA?), it decides whether to open in the direction of the move (H+) or against it (H-).

Then there's not much left of the dissertation, is there?

And the perceptron, in your view, predicts ONLY the direction.

 
M1kha1l >> :

Then there's not much left of the dissertation, is there?

And the perceptron, in your view, predicts ONLY the direction.

So we are not interested in the thesis but in the profit... For the same reason, we don't need to know anything except the direction (the sign of the next reference point).

 

Aren't Pastukhov's patterns the same thing?

What difference does it make whether you analyse them with statistics or nets...

 
gpwr wrote >>

My network was given 300 training examples and had 45 weights. There is an opinion in the literature that if there are 5 times more training examples than weights, the network will generalize well with 95% probability. That is, according to the theory my network should generalize well, but in fact it does not, which is why I gave examples to confirm it. I think the point here is not to take more training examples; it is the nature of the problem I am forcing the network to solve. If you try to make the network predict the size of the next price step, then during training it will tend towards weights at which the neurons operate in the linear region of the activation function, in order to preserve the proportionality between the predicted step and the past steps at the input. That is, the task itself is linear. Given that state of affairs, adding hidden neurons will not improve anything, and the hidden layer itself becomes unnecessary. Experimenting with my network, I came to the conclusion that a single layer works as well as a double layer. Judging by your earlier posts in this thread, you have come to the same conclusion for EURUSD.
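(A rough, hypothetical illustration of that rule of thumb - the layer sizes below are made up, not gpwr's actual architecture, and the factor of 5 is just the heuristic quoted above:)

# Sketch of the "examples >= 5 * weights" heuristic mentioned above.
# The architecture here is hypothetical, purely for illustration.
def weight_count(layers):
    # layers = (inputs, hidden..., outputs); fully connected layers,
    # one bias weight per receiving neuron
    return sum((a + 1) * b for a, b in zip(layers, layers[1:]))

layers = (8, 5, 1)            # e.g. 8 inputs, 5 hidden neurons, 1 output
w = weight_count(layers)      # (8+1)*5 + (5+1)*1 = 51 weights
print(w, "weights ->", 5 * w, "training examples by the 5x heuristic")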

I think the network should be used for strongly non-linear problems (like XOR or classification problems), where the neuron activation function can be chosen to be step-like.
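(For reference, a minimal sketch of the XOR point - a 2-2-1 net with a step activation and hand-set weights, purely illustrative:)

import numpy as np

# XOR is not linearly separable, so no single-layer (linear) perceptron can
# reproduce it; a 2-2-1 net with a step activation can. Weights are hand-picked.
step = lambda x: (x > 0).astype(float)

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
h = step(X @ np.array([[1., 1.], [1., 1.]]) + np.array([-0.5, -1.5]))  # hidden = [OR, AND]
y = step(h @ np.array([1., -1.]) - 0.5)                                # OR AND NOT(AND) = XOR
print(y)  # [0. 1. 1. 0.]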

If it's not too much trouble, send me (as a .rar) the sample you trained the NS on - about 10,000 examples - or the code that generates it...

At first glance, the sample you fed to the network has a strong linear relationship between input and output, which is why the network works as a linear solver...

//----

about sampling: there is a way to determine a sufficient training sample size, but a network (10-21-8-1) can be overfitted even on a sample of 50,000 or 100,000 examples...

so it's better to train with cross-validation...
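(One common way to do this is a hold-out validation set with early stopping; a minimal sketch on synthetic data, not tied to any particular network from this thread:)

import numpy as np

# Early stopping on a hold-out set: keep training only while the error on data
# the model has NOT seen keeps improving. Synthetic data, plain linear model.
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 10))
y = X @ rng.normal(size=10) + 0.5 * rng.normal(size=1000)
X_tr, y_tr, X_va, y_va = X[:700], y[:700], X[700:], y[700:]

w, best_err, best_w, patience = np.zeros(10), np.inf, np.zeros(10), 20
for epoch in range(10_000):
    w -= 0.01 * X_tr.T @ (X_tr @ w - y_tr) / len(y_tr)   # one gradient step
    val_err = np.mean((X_va @ w - y_va) ** 2)
    if val_err < best_err:
        best_err, best_w, patience = val_err, w.copy(), 20
    else:
        patience -= 1
        if patience == 0:                                 # validation stopped improving
            break
print("stopped at epoch", epoch, "validation MSE", round(best_err, 4))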

 
paralocus wrote >>

We are not interested in the thesis but in the profit... For the same reason, we don't need to know anything except the direction (the sign of the next reference point).

And on what timeframe, in your view, does forecasting the candlestick direction make sense? I'm digging in this very direction now (without using a neural net); the results (probability of correct forecasts) are: m1 - 69%, m5 - 61%, m15 - 58%, m30 - 56%, h1 - 55%, h4 - 51%, d1 - 46%, w1 - 51%, mn1 - 58%. An Expert Advisor based on this method loses money at about the rate of the spread. :/
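(A sketch of how such a hit rate can be measured - the "same sign as the previous candle" predictor and the synthetic series below are placeholders, since the actual method is not stated:)

import numpy as np

# Measuring the fraction of correctly forecast candle directions.
# Placeholder predictor: next candle has the same sign as the previous one.
rng = np.random.default_rng(1)
closes = np.cumsum(rng.normal(size=10_000))     # synthetic close prices
signs = np.sign(np.diff(closes))                # +1 up candle, -1 down candle

pred, actual = signs[:-1], signs[1:]
print(f"hit rate: {np.mean(pred == actual):.2%}")   # ~50% on a random walk;
# anything below what covers the spread still loses money, as noted above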

 
lea >> :

And on what timeframe, in your view, does forecasting the candlestick direction make sense? I'm digging in this very direction now (without using a neural net); the results (probability of correct forecasts) are: m1 - 69%, m5 - 61%, m15 - 58%, m30 - 56%, h1 - 55%, h4 - 51%, d1 - 46%, w1 - 51%, mn1 - 58%. An Expert Advisor based on this method loses money at about the rate of the spread. :/

None of them, in my opinion. Timeframes are out!

 
paralocus >> :

None of them, in my opinion. Timeframes are out!

After all, you have inherited your Hero's temperament! Take your time, the time will come and you'll write something like this:

Ticks are out!

It's just, all in good time...

Good luck! :о)

 
grasn wrote >>

It's just, all in good time...

While my neural network is gathering statistics (it crashed a couple of times because of unstable convergence with a large number of neurons in the hidden layer, so I had to re-introduce normalization of the training step by the length of the weight vector), I will give my feedback on the applicability of NS.

Above, I pointed to the ability of a trained network to fill statistical gaps in the training vector. This, in my opinion, allows NS to be used effectively when training data is scarce. However, nature turned out to be even more interesting... It seems that the main specialization of NS lies in a slightly different area. Its job is to "output" a value (a prediction) for input values that did not take part in the training. That sounds obvious, but think about it: it is enough to have a few reference points in the REQUIRED range of inputs (the range of values the inputs take) to predict, as reliably as possible, the expected value for "slightly" different input data. This is the main property of NS, and the key point is the continuity of the range of input values. This is where the power of NS comes into full play.
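(A minimal sketch of that interpolation property, under assumed toy settings - the function, network size and reference points below are arbitrary:)

import numpy as np
from sklearn.neural_network import MLPRegressor

# Train on a handful of reference points of a continuous dependence,
# then query inputs that did NOT take part in the training.
x_ref = np.linspace(-1, 1, 7).reshape(-1, 1)    # a few reference inputs
y_ref = np.sin(3 * x_ref).ravel()               # continuous target

net = MLPRegressor(hidden_layer_sizes=(10,), activation="tanh",
                   solver="lbfgs", max_iter=5000, random_state=0)
net.fit(x_ref, y_ref)

x_new = np.array([[-0.55], [0.15], [0.8]])      # "slightly" different, unseen inputs
print(net.predict(x_new))                       # roughly follows sin(3*x) between the anchors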

What happens if the input data are discrete? Nothing in particular - NS will work too. But now we have the opportunity to gather statistics on all combinations of discrete values and make the same forecast as the NS would, but without it. We should not forget that NS will do it faster and more elegantly (if there are many discrete values - but if not...). And if the discreteness of the input values is coarse, or there are only two of them (+/-1), then, as it turns out, NS is not needed at all! It is enough to gather statistics for each combination, and nothing in nature will give a more accurate prediction than that.

For prediction on binary inputs there are much more efficient methods than NS. This does not diminish the merits of NS, but it is fascinating how the prediction of BP reduces to a binary prediction!

The figure shows the range of states accepted by an NS with two binary inputs. The number of combinations the input values can take is only 4, and for each of them a Buy/Sell decision has to be made. No NS is needed here - trivial statistics will do. For an NS with 3 inputs we get a 3-dimensional cube with 8 vertices, with the same Buy/Sell decision at each vertex, and so on.
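(The same point as a sketch: with binary inputs, a table of counts over the 2^n input states replaces the net - synthetic data, purely illustrative:)

import numpy as np
from collections import defaultdict

# "Trivial statistics" for binary (+/-1) inputs: count outcomes for each of the
# 2^n input combinations and take the majority as the Buy/Sell decision.
rng = np.random.default_rng(2)
X = rng.choice([-1, 1], size=(5000, 2))                       # 2 inputs -> only 4 states
y = np.sign(X[:, 0] * X[:, 1] + 0.3 * rng.normal(size=5000))  # noisy synthetic target

counts = defaultdict(lambda: [0, 0])                          # state -> [downs, ups]
for inp, out in zip(map(tuple, X), y):
    counts[inp][int(out > 0)] += 1

for state, (down, up) in sorted(counts.items()):
    print(state, "->", "Buy" if up > down else "Sell", f"({up}/{up + down} up)")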

Once again, I am not belittling the merits of NS. For example, if we forecast the Daphnia population in a pond, which depends on a hundred or two factors (water acidity, temperature, etc.), we cannot make a reliable forecast without NS: if even one or two of the parameters change by 1%, we will probably land in an area where there are no statistics at all, or where they are not suitable for interpolation.

 
Neutron >> :
...

Sounds like a verdict... a "verdict" like, say, Minsky's, who proved serious limitations of perceptrons - limitations that are simply forgotten in the pursuit of money. People also forget that even a multilayer network with nonlinear activation gives absolutely no guarantee of correct classification, and they also forget... (but that is a joke, not the start of another argument). I confess I still do not understand from your post what the power of NS is, but my experience of building my own networks and of using specialized NS tools gives a simple and clear answer: using a perceptron gives no advantage at all, especially on "bad" series (your kagi constructions are very bad series for prediction).


But good luck anyway. See you later (I'll disappear for a couple of weeks).

 
If anyone is interested, a full history of futures quotes is available on FORTS. It goes several years deep.