Discussion of article "Neural Networks Made Easy" - page 4

 
govich:

If you want to understand the structure of MLP, IMHO it is better to look at this code...

Thanks for the code, but I'm still at the level of theory. The code is the next step.
 
Peter Konow:
Thank you. The article and the links helped me understand the purpose of neural networks - finding and processing an invariant within the variation space of a data set - and the simplest method of its technical implementation, which I have yet to fully grasp. Still, the explanations are very lucid.

And I like the gist of it better:


 
Denis Kirichenko:

And I like the gist of it better:

You mean you have to know the forecast in advance?

Well...

 
Denis Kirichenko:

And I like the gist of it better:


Next Step:)

Reconstructing the programme from the data and the result of its processing...

Today, that remains the prerogative of humans.
 
Good article - the first one where error backpropagation is actually implemented... I would like to see examples for different network architectures.
 
Dmitriy Gizlyk:

The 5 Why technique is built on sequential questions, where each answer becomes the subject of the next "why". For example, we look at a rising price chart and construct the questions (the answers here are deliberately simplified to explain the technique):
1. Which way to trade? - Buy.
2. Why buy? - Because the trend is rising.
3. Why is the trend rising? - Because MA50 is rising.
4. Why is MA50 rising? - Because the average closing price of 50 candles with a shift of 1 is lower than the average closing price of the last 50 candles.

etc.
Since the questions are sequential and linked by cause and effect, we create layers to follow this relationship. If we use only 2 layers, the cause-and-effect relationship is lost: the neural network analyses a number of independent options and simply chooses the best one.
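
To make step 4 of the quoted chain concrete, here is a minimal sketch in plain Python with NumPy (not the article's MQL5 code; the `closes` array and the function name are my own illustration) of the "MA50 is rising" check described above:

```python
import numpy as np

def ma50_is_rising(closes: np.ndarray) -> bool:
    """MA50 is 'rising' when the 50-candle average taken with a shift of 1
    is below the average closing price of the most recent 50 candles."""
    ma50_now = closes[-50:].mean()      # average close of the last 50 candles
    ma50_prev = closes[-51:-1].mean()   # the same average, shifted back by one candle
    return ma50_prev < ma50_now

# Usage with synthetic data: a gently rising close series should return True.
closes = np.linspace(1.1000, 1.1100, 200)
print(ma50_is_rising(closes))
```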

The 5 Why method is similar to a decision tree: the search area for the solution/cause is likewise narrowed step by step. Still, it is not clear how that leads to using 4 layers in the network with this particular structure. I would understand if the structure were more intricate, e.g. the second layer being fed the summed output of the first layer together with the unchanged input signal, etc. (see the sketch after this post).

Can you tell me whether you relied on any other work that used this kind of basis for choosing the number of layers, or is this your own know-how?

P.S. The work is good, thanks.
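
For reference, a minimal NumPy sketch of the two structures being discussed (my own illustration, not code from the article; layer sizes and weights are arbitrary): a plain stack of hidden layers, where each layer sees only the previous layer's output, versus the variant mentioned above, where the second layer also receives the unchanged input signal alongside the first layer's output (realised here via concatenation, which is one way to implement the idea).

```python
import numpy as np

rng = np.random.default_rng(0)

def dense(n_in, n_out):
    """Random weights and zero bias for one fully connected layer (illustration only)."""
    return rng.normal(0.0, 0.5, (n_in, n_out)), np.zeros(n_out)

x = rng.normal(size=(1, 10))                 # hypothetical 10-feature input vector

# Plain stack of 4 hidden layers: each layer sees only the previous layer's output.
acts = x
sizes = [10, 8, 8, 8, 8]
for n_in, n_out in zip(sizes[:-1], sizes[1:]):
    W, b = dense(n_in, n_out)
    acts = np.tanh(acts @ W + b)
plain_out = acts

# Variant from the question: layer 2 is fed layer 1's output plus the raw input.
W1, b1 = dense(10, 8)
h1 = np.tanh(x @ W1 + b1)
W2, b2 = dense(8 + 10, 8)                    # extra columns for the skipped-in raw input
h2 = np.tanh(np.concatenate([h1, x], axis=1) @ W2 + b2)

print(plain_out.shape, h2.shape)             # (1, 8) (1, 8)
```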

 
Dmitriy Gizlyk:

The 5 Why technique is built on sequential questions, where each answer becomes the subject of the next "why". For example, we look at a rising price chart and construct the questions (the answers here are deliberately simplified to explain the technique):
1. Which way to trade? - Buy.
2. Why buy? - Because the trend is rising.
3. Why is the trend rising? - Because MA50 is rising.
4. Why is MA50 rising? - Because the average closing price of 50 candles with a shift of 1 is lower than the average closing price of the last 50 candles.

etc.
Since the questions are sequential and linked by cause and effect, we create layers to follow this relationship. If we use only 2 layers, the cause-and-effect relationship is lost: the neural network analyses a number of independent options and simply chooses the best one.

Sheer heresy. How are your questions related to the number of layers?! You should read up on neural network theory.

The four-layer neural network you have built cannot learn anything. There is such a thing as gradient degradation during error backpropagation (the vanishing gradient problem): with two layers everything is fine, but with 4 layers the error signal simply does not propagate back through all the layers. Deep neural networks of various architectures appeared precisely to solve this problem.
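
To illustrate the gradient degradation mentioned above, here is a minimal sketch (my own illustration in NumPy, not the article's code; sigmoid activations, 4 layers of 20 neurons and small random weights are assumptions) that pushes a unit error backwards and prints how quickly it shrinks layer by layer:

```python
import numpy as np

rng = np.random.default_rng(1)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))
n, depth = 20, 4                              # 20 neurons per layer, 4 layers

x = rng.normal(size=n)
weights = [rng.normal(0.0, 0.5, (n, n)) for _ in range(depth)]

# Forward pass, keeping the pre-activations for the backward pass.
activations, zs = [x], []
for W in weights:
    z = W @ activations[-1]
    zs.append(z)
    activations.append(sigmoid(z))

# Backward pass: start from a unit error at the output and apply the chain rule.
delta = np.ones(n) * sigmoid(zs[-1]) * (1.0 - sigmoid(zs[-1]))
print(f"layer {depth}: mean |delta| = {np.abs(delta).mean():.6f}")
for l in range(depth - 2, -1, -1):
    delta = (weights[l + 1].T @ delta) * sigmoid(zs[l]) * (1.0 - sigmoid(zs[l]))
    print(f"layer {l + 1}: mean |delta| = {np.abs(delta).mean():.6f}")
```

Each backward step multiplies the error by the sigmoid derivative (at most 0.25), so the signal that reaches the early layers gets noticeably smaller with every extra layer; this is the vanishing gradient effect referred to above.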

It is probably useful as a programming exercise and as a coding example, but there will be no benefit from practical application. IMHO.

Good luck

 
Please give an example ... it's not very clear how to use it
 

There is plenty of information on this type of network (and many other types) on this site, both articles and programmes in the codebase. You just need to search for it. For example, articles: Neural Network Recipes, Neural Network Self-Optimising Expert Advisor; codebase - classes for MLP (see several other types of networks by the same author, and not by him alone), indicator with neural network prediction (with a brief description of the theory), etc. There is enough information to understand how it works. But for real use, one source is not enough - you need to read a lot of materials and take into account a lot of nuances.

 
Stanislav Korotky:

There is plenty of information on this type of network (and many other types) on this site, both articles and programmes in the codebase. You just need to search for it. For example, articles: Neural Network Recipes, Neural Network Self-Optimising Expert Advisor; codebase - classes for MLP (see several other types of networks by the same author, and not by him alone), indicator with neural network prediction (with a brief description of the theory), etc. There is enough information to understand how it works. But for real use, one source is not enough - you need to read a lot of materials and take into account a lot of nuances.

You don't understand: I am familiar with neural networks.

I need a code example for a simple architecture (for example, two neurons at the input, 3 in the hidden layer, one at the output) and the results of its work: both in terms of profit (a graph) and in terms of training time on specific hardware (see the sketch below)...

I simply want to understand whether this scheme will really work with sufficient accuracy (profitability) and a reasonable training time...

I should add that developing classes and functions without giving an example of an Expert Advisor built on those classes does not make a complete article... it is just theory, of which there is plenty on the Internet.
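
Not the article's classes, but as a bare illustration of the 2-3-1 architecture described above, here is a minimal self-contained NumPy sketch trained by backpropagation on a toy XOR problem (the data, learning rate and epoch count are my own placeholders; profit curves and training time on real hardware are exactly the things such a toy cannot show):

```python
import numpy as np

rng = np.random.default_rng(42)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

# 2 inputs -> 3 hidden neurons -> 1 output, as requested above.
W1, b1 = rng.normal(0.0, 1.0, (2, 3)), np.zeros(3)
W2, b2 = rng.normal(0.0, 1.0, (3, 1)), np.zeros(1)

# Toy training set (XOR) standing in for whatever features/targets you would use.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

lr = 1.0
for epoch in range(10_000):
    # forward pass
    h = sigmoid(X @ W1 + b1)            # hidden activations, shape (4, 3)
    out = sigmoid(h @ W2 + b2)          # network output, shape (4, 1)

    # backward pass (mean squared error)
    d_out = (out - y) * out * (1.0 - out)
    d_h = (d_out @ W2.T) * h * (1.0 - h)

    # gradient descent step
    W2 -= lr * h.T @ d_out
    b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * X.T @ d_h
    b1 -= lr * d_h.sum(axis=0)

print(np.round(out, 3))   # should approach [0, 1, 1, 0]; an unlucky random start may need more epochs
```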