Neural network

 
Many people talk about having the network search for and learn rules, out of the whole variety of entry rules, that will trade at a profit... I have a different question... Is it possible, or rather, how does one teach a network to work by ready-made rules? Take a channel system. It does not matter how or under what circumstances the channel is built. How do you teach a network to trade inside the channel, or on its breakout? Well, neural network minds, what can you suggest on this? Take at least an unsupervised network... a recurrent one, for example. How do you prepare the inputs so that the network understands what is required of it, if what you start with is a channel that changes over time?
 
nikelodeon wrote >>:
Many people talk about having the network search for and learn rules, out of the whole variety of entry rules, that will trade at a profit... I have a different question... Is it possible, or rather, how does one teach a network to work by ready-made rules? Take a channel system. It does not matter how or under what circumstances the channel is built. How do you teach a network to trade inside the channel, or on its breakout? Well, neural network minds, what can you suggest on this? Take at least an unsupervised network... a recurrent one, for example. How do you prepare the inputs so that the network understands what is required of it, if what you start with is a channel that changes over time?

If we have ready-made rules, what the hell do we need a neural network for?!

The question is rhetorical, no need to answer.

 

By adding an input (a breakout signal, say), you could increase the profitability of the system... The network would work out which breakouts are false and which are true, something like that...
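A minimal sketch of how such an input and its labels could be put together, in Python; the channel bounds, the lookahead window and the min_move threshold below are hypothetical placeholders, not anyone's actual rules:

```python
import numpy as np

def breakout_samples(close, upper, lower, lookahead=10, min_move=0.0010):
    """For every bar where close crosses a channel bound, build a feature
    vector and a label: 1 if price kept moving in the breakout direction
    by at least min_move within `lookahead` bars, else 0 (a false breakout)."""
    X, y = [], []
    for t in range(1, len(close) - lookahead):
        broke_up = close[t] > upper[t] and close[t - 1] <= upper[t - 1]
        broke_dn = close[t] < lower[t] and close[t - 1] >= lower[t - 1]
        if not (broke_up or broke_dn):
            continue
        direction = 1.0 if broke_up else -1.0
        width = upper[t] - lower[t]
        # Features: breakout direction, distance beyond the bound relative
        # to channel width, and recent momentum in channel-width units.
        dist = (close[t] - (upper[t] if broke_up else lower[t])) / width
        mom = (close[t] - close[t - 5]) / width if t >= 5 else 0.0
        X.append([direction, dist, mom])
        future = (close[t + lookahead] - close[t]) * direction
        y.append(1.0 if future >= min_move else 0.0)
    return np.array(X), np.array(y)
```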

 
nikelodeon wrote >>

By adding an input (a breakout signal, say), you could increase the profitability of the system... The network would work out which breakouts are false and which are true, something like that...

You seem to overestimate the capabilities of neural networks.

To your post of 09.08.2009 21:00.

A network by itself cannot search for anything. It can only reveal regularities (links between inputs and outputs); the accumulated "knowledge" can then be used to calculate outputs in a situation the network has not been directly trained on. That is, a search for patterns is only possible if and when the patterns actually exist.
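A toy demonstration of this point (all numbers here are made up): the same small perceptron is trained twice, once on data generated by a real rule and once on the same inputs with shuffled labels. Test accuracy rises above chance only when the regularity actually exists.

```python
import numpy as np

rng = np.random.default_rng(0)

def train_and_test(X, y, hidden=8, epochs=2000, lr=0.5):
    n = len(X) // 2
    Xtr, ytr, Xte, yte = X[:n], y[:n], X[n:], y[n:]
    W1 = rng.normal(0, 1, (X.shape[1], hidden)); b1 = np.zeros(hidden)
    W2 = rng.normal(0, 1, (hidden, 1)); b2 = np.zeros(1)
    sig = lambda z: 1 / (1 + np.exp(-z))
    for _ in range(epochs):
        h = np.tanh(Xtr @ W1 + b1)                 # hidden layer
        out = sig(h @ W2 + b2).ravel()             # output neuron
        d_out = ((out - ytr) / len(Xtr))[:, None]  # BCE + sigmoid gradient
        d_h = (d_out @ W2.T) * (1 - h ** 2)        # backprop through tanh
        W2 -= lr * (h.T @ d_out); b2 -= lr * d_out.sum(0)
        W1 -= lr * (Xtr.T @ d_h); b1 -= lr * d_h.sum(0)
    pred = sig(np.tanh(Xte @ W1 + b1) @ W2 + b2).ravel() > 0.5
    return (pred == (yte > 0.5)).mean()

X = rng.normal(0, 1, (400, 2))
rule = (X[:, 0] * X[:, 1] > 0).astype(float)  # a regularity that exists
noise = rng.permutation(rule)                 # same labels, detached from X
print("real rule, test accuracy:  ", train_and_test(X, rule))
print("shuffled labels, accuracy: ", train_and_test(X, noise))
```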

Now to your post of 09.08.2009 21:56.

If you know the regularities of channel breakouts (your example), why use a neural network when you can implement them directly, which is easier? Neural networks are a generalised approach; it is pointless to use a generalised tool when the particular case is insanely simple. Moving averages, too, can be reduced to rules: there has to be a correlation between MA behaviour and channel breakouts (and you have to be able to express that correlation). Neural networks do not work miracles by themselves.

p.s. So, to answer your question, "Is it possible, or rather, how does one teach a network to work by ready-made rules?": yes, it is possible, if you understand the mathematics used in neural networks. By the way, a recurrent neural network is not the best example. Take something simpler (a multilayer perceptron without feedback); there it can be done without any problems.
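A sketch of what "teaching a network ready-made rules" looks like in practice, assuming a made-up channel rule: the rule itself generates the target labels, and a feed-forward perceptron is then fitted to imitate them.

```python
import numpy as np

def channel_rule(close, upper, lower):
    """The ready-made rule: sell at the upper bound, buy at the lower
    bound, stay flat in between. Targets are in {-1, 0, +1}."""
    signal = np.zeros_like(close)
    signal[close >= upper] = -1.0
    signal[close <= lower] = +1.0
    return signal

def make_dataset(close, upper, lower):
    """Input: the price's position inside the channel, scaled to [-1, 1].
    The network never sees the rule itself, only (input, target) pairs."""
    pos = 2 * (close - lower) / (upper - lower) - 1
    return pos[:, None], channel_rule(close, upper, lower)

# Hypothetical usage with a flat channel at 1.30/1.32:
close = np.array([1.295, 1.305, 1.310, 1.318, 1.325])
upper = np.full_like(close, 1.32)
lower = np.full_like(close, 1.30)
X, y = make_dataset(close, upper, lower)
print(y)   # [ 1.  0.  0.  0. -1.]
```

Any standard MLP fitted on these (X, y) pairs will simply reproduce the rule, which is exactly why implementing the rule directly is usually the easier option.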

 
Good afternoon to everyone in this thread. I would like to ask a question of people who know about neural networks; I am a complete novice myself. I have recently written in MQL4 a backpropagation network with one hidden layer. I feed it deltas of quotes, and I normalise the network inputs and the layer outputs using sigmoid minus 0.5. I train it on the direction of price movement until the error on a test sample stops decreasing. I have run into a couple of unpleasant things: 1. For some reason the network gives only 2 output variants, so it is impossible to distinguish weak signals from strong ones. Is this a peculiarity of a 3-layer network? Should I increase the number of layers? 2. The RMS error is smaller the closer the upper and lower output thresholds are to each other; when the thresholds are equal, the reading is at least 0.22. Is this normal?
 
What are you training it on? If you teach it two "output options", up and down, as I understand it, then that is what it will learn. If you put several neurons at the output, i.e. increase the number of classes (e.g. strong and weak trades plus a flat), you can get a more detailed prediction. One hidden layer is enough. What are these thresholds? The bias weight is not set externally; it is trained the same way as the other weights. And if the activation function is a sigmoid, why do you need thresholds at all? Have you tried it without them?
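A small sketch of the multi-class output suggested here (the class names and thresholds are invented for illustration): one output neuron per class, with the prediction read off by argmax.

```python
import numpy as np

CLASSES = ["strong down", "weak down", "flat", "weak up", "strong up"]

def encode_target(future_move, weak=0.0005, strong=0.0020):
    """One-hot target vector for a 5-neuron output layer."""
    if future_move <= -strong:  k = 0
    elif future_move <= -weak:  k = 1
    elif future_move < weak:    k = 2
    elif future_move < strong:  k = 3
    else:                       k = 4
    t = np.zeros(len(CLASSES)); t[k] = 1.0
    return t

def read_prediction(outputs):
    """outputs: raw activations of the 5 output neurons."""
    return CLASSES[int(np.argmax(outputs))]

print(encode_target(0.0012))                         # [0. 0. 0. 1. 0.]
print(read_prediction([0.1, 0.2, 0.1, 0.9, 0.3]))    # "weak up"
```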
 

See an example of which inputs are fed to a neural network: http://www.nnea.net/research/18-neural-network-forecast-indicator



 

I couldn't find any inputs there...

 
Burgunsky wrote >>
I have run into a couple of unpleasant things: 1. For some reason the network gives only 2 output variants, so it seems impossible to separate weak signals from strong ones. Is this a peculiarity of a 3-layer network? Should I increase the number of layers? 2. The RMS error is smaller the closer the upper and lower output thresholds are to each other; when the thresholds are equal, the reading is at least 0.22. Is this normal?

No, this is not a feature of 3-layer networks; the signal at the network's output is a continuous function. Perhaps you already have some sort of classifier built in at the network's output which produces the final signal, e.g. if the output is greater than 0.5 then 1, otherwise 0; or if it is greater than 0 then 1, otherwise -1.
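In code, the same point looks like this (the numbers are illustrative): the network's output is continuous, and the two-variant behaviour appears only once a hard classifier is bolted on.

```python
import numpy as np

raw = np.array([0.08, 0.46, 0.51, 0.93])   # example sigmoid outputs

hard = np.where(raw > 0.5, 1, 0)           # collapses to two variants
print(hard)                                # [0 0 1 1]

# The distance from 0.5 can instead be read as signal strength:
strength = np.abs(raw - 0.5) * 2           # 0 = no conviction, 1 = maximal
print(strength)                            # [0.84 0.08 0.02 0.86]
```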

As for the thresholds, I do not understand them yet. Describe in detail everything you have in your algorithm and how it is constructed, and then it will be possible to think about the questions you are asking...

 
Swetten wrote >>

I couldn't find any inputs there...

Neither could I... One can only guess that it is the number of squares the price has fallen into, plus maybe some other additional conditions...

Perhaps MetaQuotes has not noticed the decompiler advertising there...
