Market etiquette or good manners in a minefield - page 14

 
Neutron >> :

This is the biggest mistake of all. The market will punish you for your insolence!

You can only play with the Market, and by its rules. And if it likes the game, it will reward you.

Are you attributing personal qualities to the market, and with a capital letter? :)

The market is a wilderness, the analogy with the ocean seems quite apt to me.

 

I was joking:-)

I know that the market is just a random, in places quasi-stationary, process, and that the task of a TS comes down to finding and exploiting some semblance of stationarity.

sealdo wrote >>

I wonder if there is a way to calculate the targets (TP, SL), for example from the volatility of days or weeks or something else, so that they change smoothly with the market.

These values characterise the market and can only be evaluated by running the TS on history.
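For illustration only, here is a minimal sketch of the kind of volatility-scaled targets sealdo is asking about; the lookback window and the multipliers are arbitrary assumptions of mine, not anything recommended in the thread, and in practice they would still have to be evaluated by running the TS on history.

import numpy as np

def volatility_targets(daily_high, daily_low, lookback=20, tp_mult=1.0, sl_mult=0.5):
    # scale TP/SL (in price units) to the average daily range over the last `lookback` days
    ranges = np.asarray(daily_high[-lookback:]) - np.asarray(daily_low[-lookback:])
    avg_range = ranges.mean()                       # a crude volatility estimate
    return tp_mult * avg_range, sl_mult * avg_range  # (TP, SL)

# hypothetical usage with made-up daily highs/lows
tp, sl = volatility_targets(daily_high=[1.3050, 1.3080, 1.3120],
                            daily_low=[1.2990, 1.3010, 1.3050], lookback=3)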
 
Neutron >> :

I was joking:-)


There are a lot of serious dudes here! Just kidding... >>:)

Neutron, I have some questions about ORO (error backpropagation). Do I understand correctly that when an error value arrives at a neuron, it divides this error among all the weights of its inputs, in proportion to their share of the total sum of those weights?

That is, we take the error value +/-Er, sum all the weights of the incoming synapses, Sw = SUM(w1 + w2 + w3 + ... + wn), and calculate each weight's share of that total,

Dwi = wi/Sw, and then distribute +/-Er among all the weights according to their share of the total weight: wi = wi +/- Dwi*Er. Is that right? If so, is it possible to make this distribution not linear but, say, exponential? After all, in living systems the "bonuses" and "slaps", i.e. rewards and penalties, are not distributed between the "first" and the "last" linearly at all.
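For concreteness, a minimal sketch of the proportional split being asked about here (this is the questioner's proposed scheme as I read it, with made-up numbers, not standard backpropagation):

import numpy as np

def distribute_error(w, Er):
    # split the incoming error Er among the weights in proportion to their share of the total
    shares = w / w.sum()        # Dwi: each weight's fraction of Sw = w1 + w2 + ... + wn
    return w + shares * Er      # wi = wi + Dwi * Er

w = np.array([0.1, 0.3, 0.6])   # hypothetical incoming-synapse weights
print(distribute_error(w, Er=-4.0))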

Second question:

Where does the error value come from in the first place? What sign does it have? Can the loss from the previous trade serve as the "error value"?

 
paralocus >> :

Do I understand correctly that when an error value arrives at a neuron, it divides this error among all the weights of its inputs, in proportion to their share of the total sum of those weights?

No, it does not divide it, it propagates it. And it does so not because someone wants it to, but as a result of minimising the objective function.

In light of this --

If so, is it possible to make this distribution not linear but, say, exponential? After all, in living systems the "bonuses" and "slaps", i.e. rewards and penalties, are not distributed between the "first" and the "last" linearly at all.

This assumption is likely to lead to slower learning or a discrepancy in total error.

 
TheXpert >> :

No, it does not divide it, it propagates it. And it does so not because someone wants it to, but as a result of minimising the objective function.

In light of this --

This assumption is likely to lead to slower learning or a discrepancy in the total error.

Will predictability increase?

 
paralocus wrote >>

Neutron, I have some questions about ORO (error backpropagation). Do I understand correctly that when an error value arrives at a neuron, it divides this error among all the weights of its inputs, in proportion to their share of the total sum of those weights?

That is, we take the error value +/-Er, sum all the weights of the incoming synapses, Sw = SUM(w1 + w2 + w3 + ... + wn), and calculate each weight's share of that total,

Dwi = wi/Sw, and then distribute +/-Er among all the weights according to their share of the total weight: wi = wi +/- Dwi*Er. Is that right? If so, is it possible to make this distribution not linear but, say, exponential? After all, in living systems the "bonuses" and "slaps", i.e. rewards and penalties, are not distributed between the "first" and the "last" linearly at all.

Second question:

Where does the error value come from in the first place? What sign does it have? Can the loss from the previous trade serve as the "error value"?

No, it's not like that.

Let's consider not a committee of networks like yours, but an ordinary two-layer network. Then you can generalise.

You have a vector of input signals (let it be one-dimensional) of length n samples, and let sample n+1 be the check of how well the network has been trained. You feed it this vector (the n samples), having set all the weights to random values in the range +/-1 with a uniform probability density, and look at the network's output. Suppose the output came out at +5.1, while sample n+1 (the value the trained network should aim for on the training vector) is +1.1. You then take the difference between the obtained value and the desired one, +4, and add this value, keeping its sign, to each weight of the output neuron (if it has no FA, activation function), or take the derivative of the FA at this value and add that to the weights of the input neuron (if there is an FA). And so on.
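To make the step concrete, here is a minimal sketch of that output-neuron update written in ordinary delta-rule form; the learning rate, the tanh activation and all the names are my assumptions for illustration, not anything prescribed in the post:

import numpy as np

def output_neuron_step(w, x, target, lr=0.1, use_fa=True):
    # one training step for the output neuron described above
    net = np.dot(w, x)                        # weighted sum of the inputs
    out = np.tanh(net) if use_fa else net     # FA = activation function (here tanh)
    err = target - out                        # e.g. +1.1 - (+5.1) = -4
    delta = err * (1.0 - out**2) if use_fa else err   # pass the error through the FA derivative
    return w + lr * delta * x                 # adjust every weight, keeping the sign of the error

rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, size=8)                # the n input samples
w = rng.uniform(-1, 1, size=8)                # weights initialised uniformly in +/-1, as in the post
w = output_neuron_step(w, x, target=1.1)      # sample n+1 plays the role of the target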

If you digest this piece, I'll tell you next how to push the error further back, to the input weights of the first (input) layer.

 
Neutron >> :

No, it's not.

Wouldn't it be better to put this in formulas? The maths here is not complicated.

paralocus >> :

Will predictability increase?

Unlikely.

 
TheXpert >> :

No, it does not divide it, it propagates it. And it does so not because someone wants it to, but as a result of minimising the objective function.

In light of this --

This assumption is likely to lead to slower learning or a discrepancy in the total error.

And why should there be a discrepancy in the total error if Er = e(w1*x) + e(w2*x) + ... + e(wn*x)? No, the cumulative error would be equal to the incoming error.
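A quick numeric check of that claim for the proportional split sketched earlier (the numbers are made up): since the shares sum to one, the pieces of the error add back up to the incoming error.

import numpy as np

w = np.array([0.1, 0.3, 0.6])            # hypothetical incoming weights
Er = -4.0                                # incoming error
pieces = (w / w.sum()) * Er              # Dwi * Er for each weight
assert np.isclose(pieces.sum(), Er)      # the split conserves the total error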

 
TheXpert wrote >>

Wouldn't it be better to put this in formulas? The maths here is not complicated.

You can look up the formulas yourself in the literature, of which there is plenty on the Internet.

Let's not get ahead of ourselves. And don't try to complicate your life with all sorts of contrivances like "nonlinear learning" and the like; such things are from the evil one. Beauty and reliability are in simplicity and harmony!

 
Neutron >> :

If you digest this piece, I'll tell you next how to push the error further back, to the input weights of the first (input) layer.

Off to digest it...
