Neural networks, how to master them, where to start? - page 8

 
Mathemat wrote >>
There seems to be some connection with the theorem proved by Kolmogorov, that any continuous function of any number of variables can be exactly represented through continuous functions of one variable and addition. I may not have stated it precisely, but it is often mentioned in articles on neural networks.

Yes, yes. The roots of the problem go back to '58.

 
Neutron >> :

I don't use any TF.

The reason is that candlesticks are used on all timeframes, so time series built from open (or close) prices are closer to an integrated random series than series built from the original quotes by other methods (e.g. instrumental time, etc.).

What instrumental time are we talking about?

 
Neutron >> :

It's complicated.

I know that two theorems were proved not too long ago. According to the first, a three-layer nonlinear NS (one consisting of three layers of neurons, with a nonlinearity at the output of each) is a universal approximator, and increasing the number of layers further adds no power to the network. According to the second, the computational power of the network does not depend on the specific type of nonlinearity at the outputs of its neurons: what matters is that there is nonlinearity at all, not whether it is a sigmoid or an arctangent. This saves us from trying to pick the best among equals.

OK, I get it. The phrase "network power" was confusing.


It does not follow in any way from these two theorems that 4 layers are not more efficient than 3.

All they say is that wherever 4 layers can be trained, 3 can be trained too. They say nothing about training efficiency.


Let's say it's always possible to train 3 layers first. And then you can try to improve the efficiency with other architectures, in particular by increasing the layers.
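For what it's worth, the universal-approximation claim discussed above can be seen in a few lines of code. This is my own illustration, not anything from the thread: a network with a single tanh hidden layer (the simplest setting in which the theorem is usually stated), trained by full-batch gradient descent to fit a smooth function. The widths, learning rate, and target function are arbitrary assumptions.

```python
# Sketch (my own illustration, not from the thread): a one-hidden-layer
# tanh network trained by full-batch gradient descent to fit sin(x).
# As the second theorem above notes, the exact nonlinearity is unimportant.
import numpy as np

rng = np.random.default_rng(0)
X = np.linspace(-np.pi, np.pi, 200).reshape(-1, 1)
y = np.sin(X)

H = 32                                   # hidden width (arbitrary)
W1 = rng.normal(0.0, 1.0, (1, H)); b1 = np.zeros(H)
W2 = rng.normal(0.0, 0.3, (H, 1)); b2 = np.zeros(1)
lr = 0.05

for _ in range(5000):
    h = np.tanh(X @ W1 + b1)             # the nonlinearity
    pred = h @ W2 + b2
    err = pred - y                       # gradient of 0.5*MSE w.r.t. pred
    # backpropagation, written out by hand
    gW2 = h.T @ err / len(X); gb2 = err.mean(axis=0)
    dh = (err @ W2.T) * (1.0 - h**2)     # tanh' = 1 - tanh^2
    gW1 = X.T @ dh / len(X); gb1 = dh.mean(axis=0)
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1

mse = float(((np.tanh(X @ W1 + b1) @ W2 + b2 - y) ** 2).mean())
print("final MSE:", round(mse, 4))
```

The fit ends up far below the trivial baseline (predicting zero gives MSE 0.5 for sin on this interval), which is all the theorem promises: enough hidden units plus any nonlinearity suffice. It says nothing about how efficiently the training gets there.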


About non-linearity. All well and good, but whatever the proofs say, each activation function has its merits and drawbacks. That's why I came up with my own. Although it has them too.

The proof, I am sure, does not say anything about efficiency.


Therefore, I think the discussion is over.

 
registred wrote >>

What instrumental time are we talking about?

This is when the candles are built not by time, but by count (a fixed number of ticks per candle) -- see the forum search.
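A rough sketch of what count-based ("instrumental time") bars might look like in code. The function name and sample data are my own, hypothetical; the idea is just to close an OHLC bar after every n ticks instead of after a fixed clock interval.

```python
# Sketch (my own illustration): build "candles" from every n ticks
# instead of a fixed time interval -- one reading of "instrumental time".
def tick_bars(prices, n):
    """Group a tick price stream into OHLC bars of n ticks each.

    Leftover ticks that do not fill a whole bar are dropped.
    """
    bars = []
    for i in range(0, len(prices) - len(prices) % n, n):
        chunk = prices[i:i + n]
        bars.append({"open": chunk[0], "high": max(chunk),
                     "low": min(chunk), "close": chunk[-1]})
    return bars

ticks = [1.10, 1.12, 1.11, 1.15, 1.14, 1.13, 1.16, 1.12]
bars = tick_bars(ticks, 4)
print(bars)
```

In quiet periods such bars arrive slowly and in volatile periods they arrive quickly, which is why series built this way can behave differently from ordinary time-based candles.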

TheXpert wrote >>

Let's say you can always learn 3 layers first. And then you can try to improve efficiency with other architectures, in particular by increasing the layers.

That's true, but you have to pay a steep price for the extra layer (nothing fundamental, though): the complexity of learning grows like P^3!

 
Neutron wrote >>

That's true, but you have to pay a steep price for the extra layer (nothing fundamental, though): the complexity of learning grows like P^3!

And this will of course lead to different network results, but whether they will be better is the question. If a three-layer network has learned to earn, a fourth layer can do a lot of harm... So the choice of architecture is not quite the same thing as increasing the number of layers: the architecture is the type of network, and each type can have multiple layers. And the quality of learning is strongly influenced by the quality of the input, which is why even two-layer networks work perfectly well...

 

I completely agree.

 
Integer >> :

Don't bother programming neural networks yourself; there are ready-made programs. The only one with a book in Russian is Statistica Neural Networks, and the book gives the impression of being written by neural-network experts: it has a pretty decent introduction and overview of neural-network techniques and the existing types of networks. The program can export trained networks as a DLL, which can be used in MT Expert Advisors (I haven't tried it myself, so sorry if I'm wrong). Specialised trader programs with neural networks are not so easy to attach to MT, and where it is possible at all, it works crookedly or is very expensive. There are broker terminals that export data to metafiles, but even so it is not easy to hook specialised neural-network software up to them. Eh! Why don't the MT developers provide data export, so that any other program on the market could be used without unnecessary contortions?

I totally disagree... ready-made products are like 3-in-1 shampoo: a little shampoo, a little conditioner, a little balm...

But none of it of decent quality...


The network's behaviour depends heavily on many factors, and you don't know how they are implemented in such a program...

Developing your own implementation deepens your understanding of networks and gives you a very flexible tool...


How can I change the way the network's error is calculated during training, say from the standard RMS to something else?

You can't... You have to change the algorithm... and it is not available...
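To illustrate the point: in a hand-rolled trainer the error criterion is just a pluggable function, so swapping the standard squared error for something else is a one-line change. A toy sketch with a single linear neuron; all names here (`mse_grad`, `mae_grad`, `train_linear`) are my own invention, not any package's API.

```python
# Sketch of the point above: in your own code the error criterion is
# pluggable. Loss functions and names are my illustration only.
import numpy as np

def mse_grad(pred, target):          # standard "RMS-style" criterion
    return 2.0 * (pred - target)

def mae_grad(pred, target):          # robust alternative, a one-line swap
    return np.sign(pred - target)

def train_linear(x, y, loss_grad, lr=0.01, steps=2000):
    """Fit y ~ w*x + b by gradient descent with a caller-chosen loss."""
    w, b = 0.0, 0.0
    for _ in range(steps):
        pred = w * x + b
        g = loss_grad(pred, y)       # the swappable part
        w -= lr * np.mean(g * x)
        b -= lr * np.mean(g)
    return w, b

x = np.array([0.0, 1.0, 2.0, 3.0])
y = 2.0 * x + 1.0                    # exact line: w=2, b=1
w, b = train_linear(x, y, mse_grad)
print(round(w, 3), round(b, 3))
```

Passing `mae_grad` instead of `mse_grad` changes the training criterion without touching the loop itself; that is exactly the kind of control a closed program denies you.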

And there is no way to fix the bugs... Imagine finding out, after a year of trying, that there is an error in the program that cannot be corrected?

I think that would be cool... :)


That doesn't matter for students or NON-PROFESSIONALS, i.e. people who don't make money with neural networks...


When it comes to money, you need to control the whole process from beginning to end...


So my opinion is that if possible you should do everything yourself, or at least use tools that you can control...

 
Solver.it wrote >>

Imagine that after a year of trying, you find out there's a bug in the software that can't be fixed ?

I think it would be cool... :)

So you need to use popular programs such as NeuroShell, NeuroSolutions, etc., so you can read opinions about them and the like. But if the program is by some unknown Zilipupkin, then of course, yes, use it with caution.

 
Solver.it wrote >>
What about the pros? What do the pros trade on? How do they trade?
 
Solver.it wrote >>

Totally disagree...

If you were doing 3D graphics and animation, would you be writing your own 3DStudioMAX?
