Stereo Neuro Net

 

In the attached AVI, if you squint just right and slip into a state of Nirvana, you can see how a 3-layer, two-input nonlinear net shovels through the input data (the price series), looking for hidden patterns in it. And, indeed, it finds them.

P.S. This should not be taken seriously.

Files:
3d_1.zip  1295 kb
 
Neutron >> :

In the attached AVI, if you squint just right and slip into a state of Nirvana, you can see how a 3-layer, two-input nonlinear net shovels through the input data (the price series), looking for hidden patterns in it. And, indeed, it finds them.

P.S. This should not be taken seriously.

And once upon a time, I now recall, in the small hall of the Oktyabr cinema they used to hand out special glasses...

 
Are these just more cartoons from some neuro-package vendor?
 
Nah, those are my cartoons. I came up with them myself; or rather, it (the NN) figured out how to split the two input signals into Buy and Sell.
 
Neutron >> :
Nah, those are my cartoons. I came up with them myself; or rather, it (the NN) figured out how to split the two input signals into Buy and Sell.

>> Is it something like quantization of the input data, as in SOM, or some other type of NN?

 
Why two charts?
 

Well, that's for the stereo effect. It's actually a three-dimensional picture.

budimir wrote >>

>> Is it something like quantization of the input data, as in SOM, or some other type of NN?

It's a conventional three-layer perceptron with a bias and a nonlinearity in every neuron, fully retrained at each bar.
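
For concreteness, here is a minimal sketch of what such a net could look like. The layer sizes, the tanh activation, and plain squared-error backprop are all illustrative assumptions, not Neutron's actual code:

```python
import numpy as np

rng = np.random.default_rng(0)

def init_net(n_in=2, n_hidden=5, n_out=1):
    # one extra weight column per layer serves as the bias input
    return [rng.normal(scale=0.5, size=(n_hidden, n_in + 1)),
            rng.normal(scale=0.5, size=(n_hidden, n_hidden + 1)),
            rng.normal(scale=0.5, size=(n_out, n_hidden + 1))]

def forward(ws, x):
    """tanh nonlinearity in every neuron; returns all layer activations."""
    acts = [np.asarray(x, dtype=float)]
    for w in ws:
        acts.append(np.tanh(w @ np.append(acts[-1], 1.0)))  # 1.0 = bias input
    return acts

def train_step(ws, x, target, lr=0.1):
    """One backprop step on squared error."""
    acts = forward(ws, x)
    delta = (acts[-1] - target) * (1.0 - acts[-1] ** 2)  # tanh derivative
    for i in reversed(range(len(ws))):
        grad = np.outer(delta, np.append(acts[i], 1.0))
        if i > 0:  # propagate the error back, skipping the bias column
            delta = (ws[i][:, :-1].T @ delta) * (1.0 - acts[i] ** 2)
        ws[i] -= lr * grad
    return acts[-1]
```

The single output in (-1, 1) can then be read as a signal: positive for Buy, negative for Sell.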
 
If it's a regular three-layer perceptron, why does it have to be completely retrained on EVERY bar?
 

Can I ask you a question?

If it's possible, why not?

 

The possibility exists, but there are special types of NN for which training at every bar really is necessary. For an MLP-type NN, there must be some criterion by which a full retrain at EVERY bar is needed, and "because the possibility exists" is a questionable criterion.

 
By engaging in this dialogue, we are each, subconsciously, solving a different optimization problem (in the global sense). Which approach you have chosen I can only guess. As for mine, at this stage of the research I have enough computing power at my disposal not to constrain myself by the "complexity of NN training" parameter. Obviously, there is no harm in retraining (additional training) the NN at every step. This lets me reduce the dimensionality of the parameter space in the domain under study by one and concentrate on other interesting aspects of AI. In that sense, I think I am acting optimally.
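
To make the "retrain at every step" idea concrete, here is a hypothetical per-bar loop built on the sketch above. The sliding-window size, the number of epochs, and the sign-of-next-move target are all illustrative choices, not anything stated in the thread:

```python
def run_per_bar(prices, ws, window=50, epochs=10):
    """Additionally train the net at every new bar, then emit a signal."""
    x = np.diff(prices)                    # bar-to-bar price changes as inputs
    signals = []
    for t in range(window + 1, len(x)):
        for _ in range(epochs):            # retraining at every bar
            for k in range(t - window, t - 1):
                # inputs: two consecutive changes; target: sign of the next one
                train_step(ws, x[k:k + 2], np.sign(x[k + 2]))
        out = forward(ws, x[t - 1:t + 1])[-1][0]
        signals.append('Buy' if out > 0 else 'Sell')
    return signals
```

Paying the training cost at every bar removes one axis from the search space, which is exactly the trade-off the paragraph above argues for.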