What to feed to the input of the neural network? Your ideas... - page 78

 
Maxim Dmitrievsky #:
Well, you were shouting in the comments; I thought it had caused a storm of indignation in you )))
Resentment, yes. That's a fact.

The man postulates how it "should" be.

And I'm trying to make the point that there's no point in postulating how it "should" be if it "doesn't work".

"Should" is a retelling of a textbook, not forex research.
 
Ivan Butko #:
Resentment, yes. That's a fact.

The man postulates how it "should" be.

And I'm trying to make the point that there's no point in postulating how it "should" be if it "doesn't work".

"Should" is a retelling of a textbook, not forex research.
I think he wrote there that the method is even worse than conventional statistics. One can agree with that. If statistical methods can be made better and clearer, then why bother with a study that is not explained in any way and is therefore rubbish?
 
Ivan Butko #:
But it looks like postulates.


You can't get anywhere without postulates.


For example, Einstein simply postulated that the speed of light from point A to point B is equal to the speed of light from point B to point A.

But the thing is that even today there is no technical way to measure the speed of light "one way only"; it can only be measured "there and back".

However, a "there and back" measurement actually yields the average of two speeds: the speed from A to B plus the speed from B to A, divided by two.


Why postulate this in the first place?

It seems obvious that the speed of light from A to B must equal the speed of light from B to A.

But no: precisely because the postulate itself had to be written down, it is not obvious at all, and everything that follows is written only on the basis of that postulate.

That is, by creating the postulate, he laid the groundwork for himself to write everything else.


A trader is obliged to assume (to take on faith, to postulate for himself) that the pattern (combination of circumstances) he has identified in the history is still working.

And if it appears right now, he should take the corresponding trading action.

Otherwise, there will be no trading at all.

 
Evgeny Shevtsov #:


You can't get anywhere without postulates.

...

...

A trader is obliged to assume (to take on faith, to postulate for himself) that the pattern (combination of circumstances) he has identified in the history is still working.

And if it appears right now, he should take the corresponding trading action.

Otherwise, there will be no trading at all.

I tend to call it a hypothesis.

A postulate leads to a result. And there is no result in forex.

Therefore, everything they criticise in that article is a hypothesis.


Well, let it be a postulate, and then let them point out what is NOT rubbish and, by that logic, should bring profit.

It's a pity you can't turn this miracle ML apparatus over in your hands.
 
Ivan Butko #:
I tend to call it a hypothesis.

A postulate leads to a result. And there is no result in forex.

Therefore, everything they criticise in that article is a hypothesis.


Well, let it be a postulate, and then let them point out what is NOT rubbish and, by that logic, should bring profit.

It's a pity you can't turn this miracle ML apparatus over in your hands.


Well, yes, more of a hypothesis.

A postulate is a statement that is accepted without proof.

And a hypothesis is an assumption to which proof or disproof must be applied.

The proof or disproof of a trader's belief in the persistence of a pattern is that trader's profit curve, examined over a decent period of time.

And since proof or refutation is applied to it, such a belief is a hypothesis on the trader's part, not a postulate.

And if the hypothesis is proven, it becomes an established fact.

Purely in formal terms, so to speak. ))

 

The problem of numbers.


Whatever action a machine (a neural network, NN) takes, the cause is most often the result with the largest number: in regression, a large predicted value; in classification, a large probability.

And the larger the absolute value of any number in the system of neurons, positive or negative, the greater its influence on the final decision of the whole system. The task of the system is to determine the required strength of the connections between neurons.

But it turns out that the input data already carries a force factor in the form of its numeric value. If a number 0.9 that denotes a coordinate on a plane or in space, or the name of a colour shade (a priori a qualitative, not a quantitative, attribute), is fed to the input, then from the start(!) it already(!) influences the whole system more(!) than the other inputs, although in fact it is just a number that means nothing in quantitative terms. Yet its numeric magnitude acts on the system a priori, deceiving(!) and misleading the NN from the very beginning. After all, for a NN with its quantitative relations, numbers at the input are already a force factor, a factor of influence on decision making.

At the same time, the number 0.9, which does not reflect the strength of the original pattern yet still exerts force on the NN, also hurts the other numbers lying in the range below it, the smaller ones. A weight that tries to weaken (zero out) the input value 0.9 will weaken the other values in the lower range of that input even(!) more strongly(!) (and those values may later matter more for the system's performance), because the weight is static: weights do not change in a trained NN.
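
To make this concrete, a minimal Python sketch (the variable names and numbers are mine, not from the thread): with a static weight, a larger raw input contributes proportionally more to the weighted sum, and the very weight trained to suppress the 0.9 input drives the already-smaller codes even closer to zero.

    # Two qualitative codes fed in as raw numbers, e.g. "colour 0.9" and
    # "colour 0.1". Neither code is "stronger" than the other in meaning.
    x_big, x_small = 0.9, 0.1
    w = 1.0  # a single input weight of the neuron

    # Each input's contribution to the weighted sum is proportional to |x|:
    print(w * x_big, w * x_small)   # 0.9 0.1 -> the 0.9 code dominates 9:1

    # Training shrinks w to mute the 0.9 input...
    w_trained = 0.1
    print(w_trained * x_big)        # 0.09 -> muted as intended
    # ...but the same static weight scales the smaller code by the same
    # factor, leaving it almost invisible to the next layer:
    print(w_trained * x_small)      # 0.01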



The factor described above is not mentioned, raised, described or solved anywhere. At least, I have not seen any detailed coverage of the problem.

Problem: if the input data encodes qualitative attributes (e.g. categories, colours) as numbers, the network interprets them as quantitative, which can distort the semantics.

A NN is a mathematical model that works with numbers. If the numerical representation of the data does not reflect the "strength" of a feature, it distorts the logic of the network.

In forex, price is a relative pattern, not an absolute signal.
As an example: closing prices [1.1000, 1.1050, 1.1025, 1.1070].
Problem: normalisation turns them into [0.0, 0.5, 0.25, 0.7], creating the illusion of a hierarchy.
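
For what it's worth, the figures above correspond to scaling against a fixed window anchored at 1.1000 with a 0.0100 (100-pip) range; that anchoring is my assumption. Plain min-max over the four prices gives different numbers but the same artificial hierarchy. A minimal Python sketch:

    closes = [1.1000, 1.1050, 1.1025, 1.1070]

    # Fixed-window scaling anchored at 1.1000 with a 100-pip range --
    # the mapping that reproduces the figures in the example:
    fixed = [(p - 1.1000) / 0.0100 for p in closes]
    print([round(v, 3) for v in fixed])   # [0.0, 0.5, 0.25, 0.7]

    # Plain min-max over the same prices gives different values,
    # but the same artificial "hierarchy" of magnitudes:
    lo, hi = min(closes), max(closes)
    minmax = [(p - lo) / (hi - lo) for p in closes]
    print([round(v, 3) for v in minmax])  # [0.0, 0.714, 0.357, 1.0]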



In this context, it seems reasonable to move from numbers to some kind of categories: to convert numbers into a pattern that has exactly two states, [1] and [0]. If the pattern is present, it is 1; if not, 0. If it is present, the input is 1 multiplied by the weight. During training, the weight is then adjusted to the "quality of the pattern", forming its objective strength. The number of input weights = the number of defined patterns.

And the following layers/neurons then learn to work with these.

The patterns themselves can be formed as relations/positions of prices relative to each other: first and second, first and third, second and third. Above: 1, equal: 0, below: -1.

We can go even deeper and allocate a separate weight to each state: if price N is higher than its neighbour, w1 fires; if lower, w2; if equal, w3.
The number of inputs (weights) then triples, but they reflect real, objective patterns of the raw prices.

And in the process of training, the NN will independently zero out any garbage/noise it finds.
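
A minimal Python sketch of this encoding (the function names are mine): each pair of prices becomes either one ternary feature or three binary indicators, so the network sees only the presence of a relation, never the raw magnitudes.

    import numpy as np

    def pairwise_signs(prices):
        """One ternary feature per pair (i, j), i < j:
        above -> 1, equal -> 0, below -> -1."""
        n = len(prices)
        return np.array([int(np.sign(prices[i] - prices[j]))
                         for i in range(n) for j in range(i + 1, n)])

    def pairwise_one_hot(prices):
        """The same relations, but one 0/1 indicator (and hence one
        dedicated weight) per state: [above, equal, below] per pair."""
        feats = []
        for s in pairwise_signs(prices):
            feats += [int(s > 0), int(s == 0), int(s < 0)]
        return np.array(feats)

    closes = [1.1000, 1.1050, 1.1025, 1.1070]
    print(pairwise_signs(closes))    # [-1 -1 -1  1 -1 -1]
    print(pairwise_one_hot(closes))  # three times as many inputs, all 0/1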

 
Ivan Butko #:
As an example: closing prices [1.1000, 1.1050, 1.1025, 1.1070].
Problem: normalisation turns them into [0.0, 0.5, 0.25, 0.7], creating the illusion of a hierarchy.

A very strange normalisation... it is usually recommended to normalise without reaching 0 (and preferably without 1). Both should be unattainable limits; hitting them brings headaches and errors.

And I think you're rehashing something old, something like Pearson's.

 
One-hot encoding to the rescue: it is used precisely for encoding categorical features. Or CatBoost, which handles them on its own.
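
A minimal scikit-learn sketch of one-hot encoding (the colour data is mine): each category gets its own 0/1 column, so no category carries a larger "magnitude" than another.

    import numpy as np
    from sklearn.preprocessing import OneHotEncoder

    colours = np.array([["green"], ["red"], ["green"], ["blue"]])
    # sparse_output requires scikit-learn >= 1.2 (older versions: sparse=False)
    enc = OneHotEncoder(sparse_output=False)
    print(enc.fit_transform(colours))
    # [[0. 1. 0.]
    #  [0. 0. 1.]
    #  [0. 1. 0.]
    #  [1. 0. 0.]]
    print(enc.categories_)  # columns are: blue, green, red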
 
You see, the point is this: until you, through meditation / data mining / dancing with a tambourine / training with various features and labels / other occult practices, find some significant patterns that actually work, all these transformations will give ±10%, at the level of the model's error.

But such patterns are also very difficult to find, even by plain enumeration.
 
Ivan Butko #:
At the same time, the number 0.9, which does not reflect the strength of the original pattern yet still exerts force on the NN, also hurts the other numbers lying in the range below it, the smaller ones. A weight that tries to weaken (zero out) the input value 0.9 will weaken the other values in the lower range of that input even(!) more strongly(!) (and those values may later matter more for the system's performance), because the weight is static: weights do not change in a trained NN.

This is an interesting idea, and apparently there is something to it. At one point I gave up NNs in favour of tree ("wooden") models; the confusion around normalisation was one of the reasons.
There, everything works rock-solid (or rather, wood-solid). Values < 0.9 go to one branch, values >= 0.9 to another. And the value 0.9 itself has no further effect on how the examples in those two branches are handled. Trees need no normalisation; any numbers are equivalent: 0.001 and 10000000 are just values to compare.
A tree also handles categorical features perfectly. For them, the split into branches is not via < or >, but via ==. For example, for colour: all green examples go via equality into one branch, all red into another, and the rest remain for further splitting (by other categories and numbers).
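
A minimal scikit-learn sketch of that scale-invariance (my own toy data): a tree splits by comparing a value against a threshold, so a monotone rescaling of a feature leaves the resulting branches, and hence the predictions, effectively unchanged.

    import numpy as np
    from sklearn.tree import DecisionTreeClassifier

    # Wildly different scales, no normalisation at all:
    X = np.array([[0.001], [0.5], [0.9], [10_000_000]])
    y = np.array([0, 0, 1, 1])

    tree = DecisionTreeClassifier().fit(X, y)
    print(tree.predict(X))          # [0 0 1 1]

    # A monotone (min-max) rescaling changes the threshold values,
    # but not which examples fall into which branch:
    X_scaled = (X - X.min()) / (X.max() - X.min())
    tree2 = DecisionTreeClassifier().fit(X_scaled, y)
    print(tree2.predict(X_scaled))  # [0 0 1 1]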