Market etiquette or good manners in a minefield

 

I've been busy comparing the forecast accuracy of single-layer and two-layer networks on EURUSD hourlies, and I see that the single layer is noticeably more effective. I think this is because there are no "tricky" non-linear dependencies between bars in the market. Everything here is as simple as a crowbar, and the dependencies are overwhelmingly linear, which is exactly what the single layer captures. Incidentally, the architecture of a single neuron is essentially a linear AR model of n-th order, where n is the number of NS inputs, and the fact that the two-layer net finds nothing new between bars suggests that non-linear AR models are useless in this particular case.
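Incidentally, the equivalence is easy to see in code. A minimal sketch in Python (my own toy illustration on synthetic data, not anything from this thread): "training" a single linear neuron on n lagged increments by least squares is exactly an AR(n) fit.

    # Illustrative sketch, not the poster's code; data and names are made up.
    import numpy as np

    rng = np.random.default_rng(0)

    # Synthetic increments with a weak linear dependence between "bars"
    T = 2000
    x = np.zeros(T)
    for t in range(2, T):
        x[t] = 0.3 * x[t - 1] - 0.2 * x[t - 2] + rng.normal(scale=0.1)

    n = 5  # number of NS inputs = order of the AR model
    # Row t of X holds the n previous increments x[t-1] ... x[t-n]
    X = np.column_stack([x[n - 1 - k : T - 1 - k] for k in range(n)])
    y = x[n:]

    # Fitting the single linear neuron by least squares = fitting AR(n)
    w, *_ = np.linalg.lstsq(X, y, rcond=None)
    print("weights (AR coefficients):", np.round(w, 3))

On this toy series the weights come out near the true [0.3, -0.2, 0, 0, 0], which is all a single linear neuron can ever express.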

paralocus wrote >>

Though I'm not absolutely confident it's correct - try your neuron on my data; it's attached along with the network, and if you have the time and inclination, check my network on your data.

Just send me your file with EURUSD 1h, and save your Mathcad files in version 11 format, otherwise I can't read them.

 
paralocus wrote >>

Sorry, of course, but I've been slow on the uptake lately. Maybe it's from sitting at the computer too long... What is this "something" you're writing about? Give me an example at least.

I once fed the stochastic(0) value to the input at the moment a new bar formed, among other things. Or you can use MA(0) with PRICE_TYPICAL as an input. By definition it already contains the close price. That is, you give the network a "hint", and all it has to do is latch onto it. If it picks up the hint, you'll know the training algorithm works.
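A minimal sketch of that hint check in Python (my own illustration, not the poster's code; the data and dimensions are made up): one input column is the target itself, so a working training algorithm should drive that input's weight toward 1 and the rest toward 0.

    # Sketch of the "hint" test: input 0 *is* the answer.
    # Illustrative only; not code from this thread.
    import numpy as np

    rng = np.random.default_rng(1)
    T, n = 1000, 4
    X = rng.normal(size=(T, n))
    y = X[:, 0].copy()            # the "hint": target leaked into input 0

    w = rng.normal(scale=0.1, size=n)
    eta = 0.05                     # learning rate
    for epoch in range(20):
        for t in range(T):
            err = y[t] - w @ X[t]
            w += eta * err * X[t]  # delta rule for a linear neuron

    print(np.round(w, 3))          # expect roughly [1, 0, 0, 0]

If the weights don't converge to picking out the leaked column, the training loop itself is broken, regardless of the market data.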

 
Neutron >> :

I've been busy comparing the forecast accuracy of single-layer and two-layer networks on EURUSD hourlies, and I see that the single layer is noticeably more effective. I think this is because there are no "tricky" non-linear dependencies between bars in the market. Everything here is as simple as a crowbar, and the dependencies are overwhelmingly linear, which is exactly what the single layer captures. Incidentally, the architecture of a single neuron is essentially a linear AR model of n-th order, where n is the number of NS inputs, and the fact that the two-layer net finds nothing new between bars suggests that non-linear AR models are useless in this particular case.

Just send me your file with EURUSD 1h, and save your Mathcad files in version 11 format, otherwise I can't read them.

I had that suspicion, but courageously rejected it :-) Sorry about the format - I didn't notice. By the way, I'm now playing with the learning rate (the Greek eta): raising it to 15-20 improved the AUDUSD results significantly - I got returns of more than 4.5. But it had no effect on EURUSD.

Files:
nero2_11.rar  222 kb
 
YDzh >> :

I once fed the stochastic(0) value to the input at the moment a new bar formed, among other things. Or you can use MA(0) with PRICE_TYPICAL as an input. By definition it already contains the close price. That is, you give the network a "hint", and all it has to do is latch onto it. If it picks up the hint, you'll know the training algorithm works.

So wouldn't it be easier to just feed the zero bar (its unfinished close) to the input? But then how do you evaluate the results? The tester won't help here, and neither will the numerical modelling that Sergei is teaching me here.

 

Warmed it up to 100... it does wonders, though!



 
Try increasing the statistics by a factor of two.
 

This is a great training method! The main thing is to understand how to use it properly.


Remember my "fantasies" about entropy and all that? Well, that's exactly what you've done: you just need to drop the initial randomization of the weights, raise the network's temperature, and then cool it down gradually. And then the question is: why do we need that second layer at all?

It would be nice to think about simultaneously optimizing three parameters: the input dimensionality, the number of epochs, and the initial temperature. All three parameters are critical, i.e. changing any of them even by one (the temperature by ten) gives a completely different result.
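For what the "heat it up, then cool it down" idea might look like, here is a toy simulated-annealing sketch in Python with the three critical parameters exposed as arguments (everything here, the names, the perturbation scale, the cooling factor, is my own assumption, not code from the thread):

    # Toy simulated annealing of a single neuron's weights.
    # Illustrative assumptions throughout; not the poster's Mathcad code.
    import numpy as np

    def anneal(X, y, epochs=100, T0=100.0, cooling=0.95, seed=0):
        rng = np.random.default_rng(seed)
        n = X.shape[1]                        # input dimensionality
        w = rng.normal(scale=0.1, size=n)
        err = np.mean((y - X @ w) ** 2)
        best_w, best_err = w.copy(), err
        T = T0                                # initial temperature
        for _ in range(epochs):
            # hotter network = bigger random jumps of the weights
            cand = w + rng.normal(scale=0.01 * np.sqrt(T), size=n)
            cand_err = np.mean((y - X @ cand) ** 2)
            # accept downhill always, uphill with Boltzmann probability
            if cand_err < err or rng.random() < np.exp(-(cand_err - err) / max(T, 1e-12)):
                w, err = cand, cand_err
                if err < best_err:
                    best_w, best_err = w.copy(), err
            T *= cooling                      # gradual cooling
        return best_w, best_err

With something like this you can grid over (n, epochs, T0) and see how sharply the result depends on each of the three knobs.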

 
paralocus wrote >>

All three parameters are critical, i.e. changing any of them even by one (the temperature by ten) gives a completely different result.

In general, this may indicate poor learning ability of the NS. Think about it: the search for the global minimum on the error surface should succeed from almost any starting point. Your setup doesn't satisfy this condition (it is sensitive to the initial randomization of the weights). That's a warning bell.

We need to keep digging until we understand it.
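One concrete way to check for that warning bell (a sketch under my own assumptions: train() is a stand-in for whatever training routine you use, e.g. the anneal() toy above, and the (weights, error) return signature is assumed): train from many random initializations and look at the spread of final errors.

    # Restart test for sensitivity to the initial weights.
    # train() is a hypothetical interface, not a real library call.
    import numpy as np

    def restart_test(train, X, y, n_restarts=20):
        errors = []
        for seed in range(n_restarts):
            _, err = train(X, y, seed=seed)  # fresh random init each run
            errors.append(err)
        errors = np.array(errors)
        print("final error: mean {:.4g}, std {:.4g}, spread {:.4g}".format(
            errors.mean(), errors.std(), errors.max() - errors.min()))
        return errors

    # usage: restart_test(anneal, X, y)

A tight spread suggests a benign error surface; a wide one means the result depends on the starting point, which is exactly the condition complained about above.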

 
Where should I even look? I'll try saving the learning results between epochs. And what about the results on my data?
 
paralocus wrote >>
Where should I even look?

Good question! I don't know. Thankfully, Mathcad lets you visualize the computation at any step. Experiment.

I'm currently tinkering with my two-layer net - looking at how learning efficiency depends on k. It's quite resource-intensive, so I'm not running your solution on my machine yet.
