Market etiquette or good manners in a minefield - page 18

 
paralocus >> :

Neutron, I also wanted to ask about Hebbian learning (I read about it in Wasserman). The weight-correction formula there seems very simple:

Wij(t+1) = Wij(t) + [OUTi(t) - OUTi(t-1)]*[OUTj(t) - OUTj(t-1)], and no gradient descent at all. Will it work?

Look up which networks it is used for and in which cases.
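For reference, a minimal sketch of the rule exactly as written above; the learning rate eta and the toy data are my own additions:

```python
import numpy as np

def hebb_update(W, out_prev, out_curr, eta=1.0):
    """Differential Hebb rule from the post above:
    W_ij(t+1) = W_ij(t) + eta*[OUT_i(t) - OUT_i(t-1)]*[OUT_j(t) - OUT_j(t-1)].
    No gradients anywhere; the formula above has no eta (i.e. eta = 1)."""
    d = out_curr - out_prev          # change of every neuron's output
    return W + eta * np.outer(d, d)  # correlate the changes pairwise

# toy usage: 4 neurons, tanh outputs at t-1 and t
rng = np.random.default_rng(0)
W = rng.normal(scale=0.1, size=(4, 4))
out_prev = np.tanh(rng.normal(size=4))
out_curr = np.tanh(rng.normal(size=4))
W = hebb_update(W, out_prev, out_curr, eta=0.1)
```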

 
HideYourRichess wrote >>

I have a fixed size. Also, if your win/loss amounts are distributed according to a normal law, then one suspects that this corresponds to a fixed size.

My losing and winning takes per trade are now equal too. To get there I had to overhaul the whole TS, sharpening it to a fixed take size, but in return you can use the full power of the optimal MM, which in this case has an exact analytical representation and which, moreover, in the long run, when reinvesting funds, no other TS will beat in profitability! To be fair, it should be noted that in general this statement is not true and a higher-return strategy does exist, but only for a trending market with a high degree of predictability (p>0.2), which for the Market is never fulfilled. That strategy would be to "cut losses short and let profits run".

The figure on the left shows the familiar picture: the logarithm of the optimal TS profit for different values of the trading leverage L. It combines the results of Monte Carlo numerical simulation of the TS operating on quotes identical to market ones (EURUSD), taking the commission Sp into account. The averaging is carried out over 200 independent trading sessions, 1000 trades per session. The initial capital is taken as 1 (ln(1)=0); the whiskers show the typical scatter of trading results at the 1/e level. Blue shows the result of the analytical solution of the Basic Trading Equation:

[Equation 1 - analytical solution of the Basic Trading Equation]
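For what it's worth, here is how such a Monte Carlo run can be organized. Everything in it is an assumption on my part: a Bernoulli TS with SL = TP = H, win probability (1+p)/2, and the per-trade capital factor 1 + L*(sigma*H - Sp)/S that is discussed further down the thread; the numbers are purely illustrative.

```python
import numpy as np

def mc_log_profit(p=0.1, H=20.0, Sp=2.0, S=10_000.0,
                  leverages=(1, 5, 10, 20), sessions=200, trades=1000, seed=0):
    """Monte Carlo of a Bernoulli TS (SL = TP = H) with reinvestment.
    Per-trade capital factor: 1 + L*(sigma*H - Sp)/S, sigma = +/-1,
    P(sigma = +1) = (1 + p)/2.  Returns mean ln(final capital), start = 1."""
    rng = np.random.default_rng(seed)
    result = {}
    for L in leverages:
        sigma = np.where(rng.random((sessions, trades)) < (1 + p) / 2, 1.0, -1.0)
        log_final = np.log(1.0 + L * (sigma * H - Sp) / S).sum(axis=1)
        result[L] = log_final.mean()       # average over independent sessions
    return result

print(mc_log_profit())
```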

By the way, Edward Thorp's paper "The Kelly Criterion in Blackjack, Sports Betting, and the Stock Market" gives an analytical solution for the variance of the account balance at the end of trading, which lets one estimate the band in which the account is likely to end up after n trades. But Thorp made a mistake in deriving it, and the result does not correspond to reality. I managed to obtain a similar relationship, and the result is shown as the lines of blue circles. You can see the excellent agreement with the results of the numerical experiments. Here is the expression for the variance of the results of trades with reinvested funds:

[Equation 2 - variance of the results of trades with reinvested funds]

Of course, for us as traders the main interest is the analysis of the risk of complete ruin. The figure on the right shows the results of numerical simulation of the maximum deposit drawdown as a percentage of its current value (blue line). We can see that the bigger the leverage we use, the deeper the account drawdowns become. We can find the average value of these maximal drawdowns and the variance of the process (red data). Unfortunately, this characteristic of the trading process is hardly informative at all. The point is that as the time a trader spends in the market grows, the number of performed trades grows and, correspondingly, so does the risk of ruin. That is, the fact of bankruptcy is only a matter of time! And no matter how careful a tactic is used in trading, sooner or later we will be brought to zero. This is true, and it is important to stop in time and skim the cream. In any case, the optimal MM guarantees the maximum growth rate of the deposit for the measured parameters of the TS (degree of predictability p and trade horizon H). Yes, we will lose the deposit, but we will also start over, and the overall growth rate of our welfare (taking possible losses into account) will be the highest possible in Nature!
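A sketch of the right-hand figure's experiment, under the same assumed model as above (Bernoulli TS, per-trade factor 1 + L*(sigma*H - Sp)/S; all numbers illustrative):

```python
import numpy as np

def max_drawdown_stats(L=10, p=0.1, H=20.0, Sp=2.0, S=10_000.0,
                       sessions=200, trades=1000, seed=1):
    """Maximum drawdown as a fraction of the running account maximum,
    averaged over independent sessions (mean and standard deviation)."""
    rng = np.random.default_rng(seed)
    sigma = np.where(rng.random((sessions, trades)) < (1 + p) / 2, 1.0, -1.0)
    equity = np.cumprod(1.0 + L * (sigma * H - Sp) / S, axis=1)
    peak = np.maximum.accumulate(equity, axis=1)       # running maximum
    dd = ((peak - equity) / peak).max(axis=1)          # worst drawdown per session
    return dd.mean(), dd.std()

for L in (1, 5, 10, 20):
    m, s = max_drawdown_stats(L=L)
    print(f"L={L}: mean max drawdown {m:.1%} +/- {s:.1%}")
```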

Let me remind you that the optimal MM guarantees the maximum deposit growth rate only when the TS has a positive MO (expectation) or, equivalently, when p>0. I also want to note that the deposit growth rate (the inverse of the characteristic deposit-doubling time), at the optimal values of leverage and trading horizon, grows sharply with increasing forecast reliability p:

[Equation 3 - deposit growth rate as a function of p]

- as the fourth power of the parameter. In such a situation it is very important to put maximum effort into developing a TS that yields the highest possible prediction accuracy, and if that requires increasing the capacity of the NS (the number of neurons in the hidden layer), do not spare energy and time: the goal pays for itself. The aim of TS optimization, then, is to find the maximum of the functional:

[Equation 4 - the functional to be maximized]

It is found by scanning just one parameter - the trading horizon H; the forecast reliability p corresponding to it is then calculated. The value of H found this way is considered optimal and is traded as long as the general market regime does not change. The market is monitored continuously. Fortunately, this is not resource-intensive when an analytical solution is available.
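A sketch of that one-parameter search. The form of the functional is my assumption (the expected log growth per trade of the Bernoulli TS at a given leverage), and p_of_H stands for whatever procedure measures forecast reliability on the quotes, e.g. an H-point partitioning of the price series:

```python
import numpy as np

def log_growth_per_trade(p, H, Sp, S, L):
    """Assumed functional: expected log capital growth per trade of a
    Bernoulli TS with SL = TP = H and win probability (1 + p)/2."""
    return ((1 + p) / 2 * np.log(1 + L * (H - Sp) / S)
            + (1 - p) / 2 * np.log(1 - L * (H + Sp) / S))

def best_H(H_grid, p_of_H, Sp=2.0, S=10_000.0, L=10):
    """Scan the single parameter H and keep the one with the best score."""
    scores = {H: log_growth_per_trade(p_of_H(H), H, Sp, S, L) for H in H_grid}
    return max(scores, key=scores.get), scores

# toy usage with a made-up reliability curve p(H)
H_opt, _ = best_H(range(5, 60, 5), p_of_H=lambda H: 0.2 * np.exp(-H / 30.0))
print(H_opt)
```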

It can be shown that, when reinvesting funds, the optimal TS is a Bernoulli TS, i.e. one in which the SL and TP of an order are equal to each other and to the Hopt found by maximizing the functional on the trading results. Moreover, there is an optimal leverage Lopt providing the maximum deposit growth rate, such that any other MM gives a smaller profit over a long time interval:

[Equation 5 - optimal leverage Lopt]

At this point, the topic of the optimal MM when working with a single instrument can be considered solved theoretically and closed in practice. What remains unsolved is maximizing the reliability of the price-increment forecast on the found trading horizon Hopt. Obviously, this is a task for a Neural Network with a retraining block at every transaction.

 
Neutron >> :

You are now solving the problem of the optimal input for the NS. Of course, you can simply feed all sorts of indicators to the input, hoping the grid will figure out what is best for it... But it is better to ask: "What is the optimal TS in the market?" Maybe you should predict exactly its moments?

Read this work. Of course it has some glitches, but they are not fundamental:

While reading Ezhov, I began to suspect that the indicators, at least in that form, are not needed at all! All these RSIs and Stochastics are of no use :)

 

I have been saying this for a long time.

The fact is that a significant proportion of all the indicators used in TA are built, one way or another, on averaging of the price series. For example, the RSI contains the average of the positive price increments and the average of the negative ones. That would be fine, but the phase lag that inevitably appears when you try to average a time series reduces all our efforts to nothing. And this is no accident: it can be shown rigorously that forecasting a time series by smoothing is possible only for series whose first-difference readings are positively correlated. For price-type series this condition is never satisfied! Hence the inevitably disappointing results. You cannot average or smooth a price series for forecasting. You need other approaches to analysis: in particular, regression methods (if there is a model) or neural network methods (if there is none).
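A quick numerical illustration of this claim, on synthetic data of my own construction: forecast the sign of the next increment by the sign of a moving average of past increments, for series whose first differences are positively vs. negatively correlated:

```python
import numpy as np

def ar1_increments(phi, n=200_000, seed=0):
    """AR(1) increments: phi > 0 gives positively correlated first
    differences; phi < 0 gives the alternating, price-like case."""
    rng = np.random.default_rng(seed)
    d = np.zeros(n)
    for t in range(1, n):
        d[t] = phi * d[t - 1] + rng.normal()
    return d

def sma_hit_rate(d, window=5):
    """Share of cases where the sign of the SMA of the last `window`
    increments matches the sign of the next increment."""
    sma = np.convolve(d, np.ones(window) / window, mode="valid")[:-1]
    return np.mean(np.sign(sma) == np.sign(d[window:]))

print(sma_hit_rate(ar1_increments(+0.3)))   # noticeably above 0.5
print(sma_hit_rate(ar1_increments(-0.3)))   # below 0.5: smoothing misleads
```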

The beauty of the analytical solution I presented in the previous post is that we explicitly obtain the functional (4), whose maximization can be handed over to the NS. Our task in this case is extremely simple - we just need to make sure the net doesn't fall apart :-)

 
Neutron >> :

I have been saying this for a long time... You cannot average or smooth a price series for forecasting... The beauty of the analytical solution I presented in the previous post is that we explicitly obtain the functional (4), whose maximization can be handed over to the NS.

Neutron, I think I'm beginning to understand something! I have a lot of questions and even a couple of ideas.

To hell with the turkeys (indicators)! Yesterday I ran an amusing experiment: I wanted to find out what a perceptron's ability to predict increments actually is.

The picture shows ONE!!! perceptron over 2 months after optimization. I'm shocked!



I have a lot of questions; I won't manage to write them all at once.

1. I pass the input signal through a hyperbolic tangent, and in order to even out its distribution density, I first multiply the signal by a coefficient K > 1 (before the tanh).

Most often it is possible to obtain a fairly uniform distribution, i.e. we get the following function: F(t) = tanh(K * Y(t)). I select K empirically, in a specially sharpened indicator. However, it is not always possible. Usually the distribution density of the tanh of the input signal, before the signal is multiplied by K, looks like this:



And after multiplying by K it looks like this:


That is, the input signal (its tanh) is, as it were, stretched over the range +/-1.
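For reference, a sketch of this squashing step; the heavy-tailed toy input and the variance check against the uniform target are my own illustration:

```python
import numpy as np

def squash(y, K):
    """Pre-scale by K, then tanh: stretches the signal over (-1, +1)."""
    return np.tanh(K * y)

rng = np.random.default_rng(0)
y = rng.standard_t(df=4, size=100_000)   # heavy-tailed, like price increments
for K in (0.5, 1.0, 2.0, 4.0):
    f = squash(y, K)
    # crude flatness check: a uniform density on [-1, 1] has variance 1/3
    print(f"K={K}: var={f.var():.3f} (uniform target 0.333)")
```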

But in the case of price-series increments we get a signal that cannot be reduced to a uniform distribution.

Here is the signal before multiplication:


After multiplication (in my indicator this is not always visible, because the middle "disappears"):



Since I have already seen that input whitening significantly affects the quality of learning and, as a result, the predictive ability, I would like to know: is there any method other than multiplying the signal?

And if not, what should I do?

 
Neutron >> :

The beauty of the analytical solution I gave in the previous post is that we have explicitly obtained the functional (4), the maximization of which can be transferred to the NS...

I have already had a chance to appreciate the main theme of this thread! -:) You're a genius, and I'm not kidding!

I have an idea. Quite possibly a fresh one. I had a "short circuit" last night... on all levels of my personal neural network -:)

The point is that all my life I have studied man, not only in the context of his social and personal realization - all that is "surface" - but as a holistic phenomenon of being and a "vessel of consciousness". Overnight, everything I had gathered over the years systematized itself (self-organized) from a structured collection of facts and assumptions into an integral whole.

I can't hide my delight! Oh, well... that was a digression.

The idea is simple:

To increase the robustness of an NS of any scale or purpose, you have to try to... infect it. A virus is certainly fatal for the deterministic logic of a Turing machine; for an NS and artificial intelligence, with a proper, "dosed" application it may turn out to be "living water". Now let's take it point by point:

1. All living organisms are in essence neural networks. The statement may seem too bold, but it is a phenomenological fact.

2. All living organisms are placed in an aggressive environment for the purpose of learning - we call it evolution. We need only remember that along with the evolution of forms there is a continuous evolution of the individual consciousnesses embodied in those forms. Consciousness itself is an effect of the complexity of the system (the neural network), and its evolutionary "bar" -:), I assume, is the ratio of the system's complexity to its entropy.

3. Systems whose entropy has fallen below a certain limit die out because they are incapable of further evolution; however, systems whose entropy has risen above a certain limit also self-destruct. Hence the conclusion: for a system to evolve successfully, its entropy should periodically, for a certain time, reach the maximum values tolerable for that system. Such a state of affairs we call a "disease". I use the word "disease" in a rather broad sense: a perfectly healthy-looking criminal is a sick man, only it is not his body that is sick but his mind, and the pain he receives comes mostly not as fever or flu but as a so-called "heavy cross", "fate" and so on. Yet this "social" pain is one of the kinds of teaching influence of the evolutionary continuum - raising the creature's entropy to its barely bearable limits. This raises a philosophical question about the teacher and his objectives... which, however, is far beyond the scope of our forum discussion -:)

4. Those that survive have developed immunity - in the broadest sense - i.e. not only against pathogenic germs and social pressures but, more importantly for evolution, against the transactional, both external and internal.

5. In any living system there are "micro-organisms" that will surely kill it if its immunity weakens enough. Why did nature arrange it that way? To increase the system's ability to resist environmental factors, i.e. to have more opportunities (time) to continue its individual evolution through constant internal "training" of the system for survival.

6. Let us assume that the task of an evolving system is to develop immunity (in all senses). Then an interesting thing turns out: the number of inputs of a living NS, like the number of its outputs (even fewer), is ridiculously small compared with the number of its neurons and connections! So we sharply increase the number of neurons in the intermediate layer (if there are three layers: input, hidden and output), and then we can try to "infect" the NS. This can be done by introducing a dosed, randomized error during the correction of weights! Going a little further, alternating training of the NS by increasing or decreasing the frequency or amplitude of this randomized error is also possible.

For example, before the correction of weights we could add a small error to the corrector via a function that (randomly), about once every 1000 calls, returns a random value from a certain range (e.g. +0.01 / -0.01). It is not known when or which neuron will get a small erroneous increment. The more often such increments occur, the higher the entropy of the system. In this case the NS will have to take into account... its own error!
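A minimal sketch of this "infection", assuming the numbers above (roughly one perturbation per 1000 corrections, amplitude 0.01):

```python
import numpy as np

rng = np.random.default_rng(0)

def infected_update(W, dW, rate=1e-3, amp=0.01):
    """Apply the usual weight correction dW, but with probability `rate`
    per weight add a random 'infection' from [-amp, +amp]. Raising `rate`
    or `amp` raises the entropy of the system."""
    infect = rng.random(W.shape) < rate                 # which weights mutate now
    noise = rng.uniform(-amp, amp, size=W.shape) * infect
    return W + dW + noise
```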

Such thoughts...

 
Neutron >> :

This is all great except for one small thing. You have an error in the original formula. The point is that the expression 1+(L*(sigma*H-Sp)/S) is not equivalent to the capital gain per trade, which is how you are trying to use it. Frankly, I don't understand on what basis you took it to be "obvious". That's the first point. Second, the formula has to differ for different currency pairs. There are three versions of the formula: for pairs with direct quotes, inverse quotes and cross rates. For "direct quote" pairs, for example, the gain, as a fraction of total capital, can be calculated as follows: (TakeProfit-Spread)*size_one_lot*number_of_lots/deposit. Correspondingly, to get the gain ratio, add 1 to the formula. The expression "size_one_lot*number_of_lots" is the volume of money involved in the transaction, taking the leverage into account. More generally, for a direct quote, there was a formula among the articles: financial result = (selling price - buying price) * number of lots * lot size - commission * number of lots ± bank interest. In this formula the spread is factored directly into the prices.
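A worked instance of the direct-quote formula with illustrative numbers (the quote convention and the values are my assumptions):

```python
def gain_fraction(take_profit, spread, lot_size, n_lots, deposit):
    """(TakeProfit - Spread) * size_one_lot * number_of_lots / deposit:
    the gain as a fraction of total capital for a direct-quote pair."""
    return (take_profit - spread) * lot_size * n_lots / deposit

# e.g. 20-pip TP, 2-pip spread, standard 100 000 lot, 0.5 lots, $10 000 deposit
g = gain_fraction(take_profit=0.0020, spread=0.0002,
                  lot_size=100_000, n_lots=0.5, deposit=10_000)
print(g, 1 + g)   # the gain fraction, and the gain ratio (add 1)
```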

 
HideYourRichess wrote >>

It's all great, except for one small thing. You have an error in the original formula. The point is that 1+(L*(sigma*H-Sp)/S) is not equivalent to the capital gain per trade, which is how you are trying to use it.

Thank you, HideYourRichess, for taking the trouble to check the calculations. I know how painful it is to check someone else's. Of course, I do not rule out errors in the formulas or in the assumptions from which they are derived, which is why I try to verify the results of the analytical solution by numerical experiment. In our case we modeled the process of discrete price movement with a constant increment step of H points. In addition, a fixed dependence of the expected increment on the previous one was built in: p = the sum of all consecutive increments divided by the doubled number of all movements. For a real market quote one can construct a similar partitioning and find the corresponding coefficient p.

So, the results of the numerical modeling coincide perfectly with the results of the analytical solution I obtained (see the left figure above). Consequently, there is no error in this formulation of the problem or in its analytical solution! One can argue about how well the model corresponds to reality, but there is no problem there either - I can always apply this partitioning to a quote and find p.
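A sketch of how that partitioning and the coefficient p can be extracted from a quote. The estimator is my reading of the description above (consecutive same-direction H-moves versus reversals), so treat it as an assumption:

```python
import numpy as np

def h_partition(prices, H):
    """Record a new reference point each time price moves H points away
    from the previous reference; return the signs of the resulting moves."""
    signs, ref = [], prices[0]
    for x in prices[1:]:
        if abs(x - ref) >= H:
            signs.append(np.sign(x - ref))
            ref = x
    return np.asarray(signs)

def estimate_p(prices, H):
    """p > 0: consecutive H-moves tend to continue (trending);
    p < 0: they tend to alternate. P(continuation) = (1 + p)/2."""
    s = h_partition(prices, H)
    if len(s) < 2:
        return 0.0
    return float(np.mean(s[1:] * s[:-1]))   # (N_same - N_opposite) / N_pairs
```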

paralocus wrote >>

I've already managed to appreciate the main topic of this thread!

Thanks for the kind words, but what is so special about knowing how to take derivatives and find the extremum of a function? Many people simply do not want to engage in detailed analysis; it is easier to charge in headlong.

I'll think carefully about what you wrote above later.

 
Neutron >> :

Thank you for your kind words...


Thank you!

Here's another one:

1. Another option for infecting the system is to introduce an additional random input into a neuron or a group of neurons - an "organ".

2. "Organ" can be represented as a specialized group of neurons having one universal feedback - i.e. each organ neuron "knows" what is at the output of any other neuron of its group( organ or family), and each group is aware of what is at the output of the organism. Such a NS will be capable of dynamic self-adaptation and the need for learning will be one of its dominants - i.e., the system can purposefully and self-motivatedly seek and generalize the knowledge it needs. Our task will be to build obstacles for it and scatter bits of knowledge here and there -:)

 
Neutron >> :


So, the results of the numerical modeling coincide perfectly with the results of the analytical solution I obtained (see the left figure above)...


There is a bit here about leverage and some of the "tricks" associated with it. It is a simulation on a trading-server emulator.
