Market etiquette or good manners in a minefield - page 90

 
paralocus wrote >>

1) Maybe you were in a hurry? He who laughs last laughs best, as the cowboys say...

2) There doesn't seem to be a big DB here. A hundred and fifty patterns at most, probably even fewer.

3) You must have been running your system on candlesticks...

1) Could be, I'm not arguing. :) Laughing is good for the health :)

2) What are the criteria for classifying patterns into the database? That's the key question, actually.

3) I ran it through candlestick chains, through reversals in different variants, through combinations of candlestick reversals, etc.

It's quite manageable in my scheme. By the way, you didn't seem to like it. Why is that? Well then, answer this:

hand on heart, why do you need a database if most of it is actually junk?

;)

 
Neutron wrote >>

It is like a crystal which has no surface, but the nodes of its crystal lattice (to put it figuratively) are there, and the coordinates of these nodes are known to us in advance, so we don't need to spend our resources on identifying the surface... We don't need the network.

Now turn the problem around a little. We have a pattern from the market, and we need to find the nearest node (variant: several nearest ones, then blend them fuzzily). Isn't that easier and cheaper?
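A minimal Python sketch of this nearest-node idea, assuming the nodes are binary patterns compared by Hamming distance; the function names and the [-1..+1] Sell/Buy scoring are illustrative assumptions of mine, not something fixed in the thread:

def hamming(a, b):
    # number of positions where two equal-length bit tuples differ
    return sum(x != y for x, y in zip(a, b))

def find_nearest(pattern, base, k=3):
    # the k nodes of the base closest to the observed pattern
    return sorted(base, key=lambda node: hamming(pattern, node))[:k]

def fuzzy_vote(pattern, base, stats, k=3):
    # blend the statistics of the k nearest nodes,
    # weighting each node by 1 / (1 + distance)
    score = 0.0
    for node in find_nearest(pattern, base, k):
        weight = 1.0 / (1.0 + hamming(pattern, node))
        score += weight * stats[node]   # stats[node] in [-1, +1]: Sell..Buy
    return score

base = [(1, 0, 0, 0, 1), (0, 1, 1, 0, 1), (1, 1, 1, 1, 0)]
stats = {base[0]: +0.6, base[1]: -0.2, base[2]: +0.1}
print(fuzzy_vote((1, 0, 1, 0, 1), base, stats))   # > 0 means lean Buy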

 
MetaDriver wrote >>

3) I ran it through candlestick chains, through reversals in different variants, through combinations of candlestick reversals, etc.

It's quite manageable in my scheme.

Oh, I almost forgot! I also ran a second pass (Level 2 forecast) on Level 1 forecast patterns.

In the database scheme this is equivalent to building a level-2 base with n! elements. The sign at the end of the previous sentence is not an exclamation mark, it's a factorial sign. So if you want to repeat the feat,

I recommend stocking up on a bigger propeller....... ;))

 
Let us look again at the feature space of a two-input NS:

The red dots correspond to the combinations on the inputs:

00

01

10

11

The statistics will answer the question: "what should we do (Buy/Sell) if the next combination of inputs is such-and-such?". No NS is needed here! Analysis of the input unambiguously binds it to one of the vertices of the square, and the statistics, with known accuracy, give us the expected direction of the position. No cunning NS can extract more reliable information than what we already have.
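In Python such a statistics table is a few lines; the data layout here (a sequence of +1/-1 signs of first differences) is an assumption for illustration:

from collections import defaultdict

def collect_stats(signs, d=2):
    # signs: sequence of +1/-1 (signs of the first differences).
    # Returns {pattern: [ups, total]} for every d-bit input combination.
    stats = defaultdict(lambda: [0, 0])
    for i in range(len(signs) - d):
        pattern = tuple(1 if s > 0 else 0 for s in signs[i:i + d])
        stats[pattern][0] += signs[i + d] > 0   # outcome right after the pattern
        stats[pattern][1] += 1
    return stats

def decide(stats, pattern):
    # Buy if the pattern was followed by an up-move more often than not
    ups, total = stats.get(pattern, (0, 0))
    return "no data" if total == 0 else ("Buy" if 2 * ups > total else "Sell")

With d = 2 the table has exactly the four rows 00, 01, 10, 11 listed above: the whole "network" collapses into a lookup table.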

An important question is estimating the minimum database size for binary patterns. Here we can act simply: numerically simulate the "smearing" of the input data over the NS inputs and estimate the variance of the fluctuations for a random BP. For definiteness, let us consider acceptable for statistical analysis the case when the fluctuation of the pattern counts is commensurate with the expected number of patterns of a given type. For example, for d=2 only four types of patterns are possible (above); the length of the "training" sample sufficient for reliable identification is then determined from the condition of equality between the statistical fluctuation amplitude and the average number of patterns of a given type.

The figure on the left shows the statistics of a random variable for a two-input statistical block (SB); on the right, for a five-input one (2^5 = 32 unique combinations of patterns of the type 10001 or 01101). The training sample length is P = 10*2^d, i.e. P = 40 samples in the first case and P = 320 in the second.
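The estimate is easy to reproduce numerically; a sketch assuming a purely random binary series (the author's actual simulation code is not shown in the thread):

import itertools, random
from collections import Counter

def pattern_counts(d, P):
    # count occurrences of each d-bit pattern over P sliding windows
    signs = [random.choice((0, 1)) for _ in range(P + d)]
    return Counter(tuple(signs[i:i + d]) for i in range(P))

for d in (2, 5):
    P = 10 * 2 ** d                       # the P = 10*2^d rule from the post
    counts = pattern_counts(d, P)
    mean = P / 2 ** d                     # expected count: 10 per pattern
    var = sum((counts[p] - mean) ** 2
              for p in itertools.product((0, 1), repeat=d)) / 2 ** d
    print("d=%d: P=%d, mean=%.0f, std dev=%.2f" % (d, P, mean, var ** 0.5))

The count of each pattern is roughly Binomial(P, 1/2^d), so its standard deviation comes out around sqrt(10), about 3, against a mean of 10: the fluctuation is roughly a third of the expected count.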

Not fatal, and no training and retraining!

P.S. I have a strong intuitive feeling that the optimal training-sample length P for an NS looks like this: P = w*2^d, where w is the number of NS weights and d is the dimensionality of the input. For example, an NS with w = 10 weights and d = 4 inputs would then need P = 10*2^4 = 160 samples.

Where is Mathemat?!

 
Neutron wrote >>

1) No NS is needed here! Analysis of the input unambiguously binds it to one of the vertices of the square, and the statistics, with known accuracy, give us the expected direction of the position. No cunning NS can extract more reliable information than what we already have.

2) Not fatal, and no training and retraining!

3) Where is Mathemat?!

1) The accuracy is known for a sample from the past. For a sample from the future...... // finish it yourself

2) Agreed. No use either.

3) :)) :)) :))

All the logic is built on the basis of the following assumptions:

1. History will KEEP repeating itself.

2. Fluctuations can be neglected, in the hope that they will be incommensurably small compared to the statistical expectation.

3. The receptors under discussion (the input info) are better than any others.

Not very reliable assumptions, really. I'm afraid Mathemat won't approve..... :)

 
MetaDriver wrote >>...

Are you having verbal diarrhea today? Then go to a specialised place for that instead of ranting in this thread.

If I'm wrong about you, excuse me, and tell me what assumptions you use for your analysis. Or have you discovered America, Columbus?

P.S. Delete some of your unnecessary posts above. You'll see for yourself which ones.

 

I want to check a little, to be sure that I understand you correctly, because with my humanities background - you know...

So for a two-input NS we have only 4 possible combinations of binary signals at the input (16 for a four-input one, respectively); the signals are the signs of the first differences of the RT series. It is clear that whether it is an NS or a super-duper-hyper NS, it cannot analyse anything but the above combinations of the input signal, and therefore, instead of looking for complex nonlinear relationships in the "analog" signal, it is enough to collect statistics on how the following RT readings depend on the combinations of input signals.

If this is the case, then the question of database size is not very difficult: the maximum possible size is 2^d, and that is obviously at least twice the size actually required, for I suspect that not all of the possible combinations of input signals at a given d will be of any value.
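If "of any value" is read as "deviates noticeably from 50/50", the pruning can be sketched as below, reusing the stats layout from the earlier sketch; the 2-sigma binomial cut is an illustrative choice of mine, not something fixed in the thread:

def valuable(stats, sigmas=2.0):
    # keep patterns whose up-count differs from half the total
    # by more than `sigmas` binomial standard deviations
    keep = {}
    for pattern, (ups, total) in stats.items():
        if total == 0:
            continue
        sd = 0.5 * total ** 0.5           # std dev of ups under p = 0.5
        if abs(ups - total / 2) > sigmas * sd:
            keep[pattern] = (ups, total)
    return keep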

Have I got it all right? If yes, I will continue.

 
Yes!
 
Neutron wrote >>

1) Are you having verbal diarrhea today? Then go to a specialised place for that instead of ranting in this thread.

2) If I'm wrong about you, excuse me, and tell me what assumptions you use for your analysis.

3) Or have you discovered America, Columbus?

4) P.S. Delete some of your unnecessary posts above. You'll see for yourself which ones.

1) :) A little, yes.

2) I try to keep them to a minimum. Though I do have some. Let me try to reflect on them:

1. There are no linear, polynomial, elastic (periodic) or other elementary regularities in the market. None.

2. Neural networks are nevertheless useful, as they are able to capture the current market patterns.

3. The regularities revealed are not long-lived and fade out gradually.

4. At the input of the NS it is better to have a variety of receptors.

5. Playing a multicurrency basket is twice as reliable as playing a portfolio.

6-7-8.... later.

3) No. I discovered a dead end at the end of this (discussed) tunnel. Also a useful result, if you think about it.

4) Yes, thanks, removed. I didn't notice that the message had gone out twice.

 

Then the question is not the size of the DB, but the statistically justified input dimension d and the value of the threshold H! I have noticed that for a proper RT series (built on ticks) the regularities that work on candlesticks do not work. For example, there is almost no point in giving the grid more than 8 inputs, because the market changes during the formation of 10 or more RT samples; on candlesticks, 24 inputs give the best result - obviously connected with daily activity.
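For reference, one plausible reading of "an RT series built on ticks with threshold H" is a kagi-style partition; a sketch under that assumption (the initial leg direction and the recorded value are my choices, not a quote from the post):

def rt_series(ticks, H):
    # transaction series: the price at each moment a reversal
    # of at least H from the running extremum is detected
    rt = []
    extremum = ticks[0]
    direction = +1                        # assume the first leg is rising
    for price in ticks[1:]:
        if (price - extremum) * direction > 0:
            extremum = price              # the current leg continues
        elif abs(price - extremum) >= H:
            rt.append(price)              # reversal detected: record it
            direction = -direction        # start the opposite leg
            extremum = price
    return rt

# the grid's binary inputs are then the signs of the first differences:
# [1 if b > a else 0 for a, b in zip(rt, rt[1:])]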

There is a suspicion that in the end the whole DB will fit into a dozen or two patterns.
