Machine learning in trading: theory, models, practice and algo-trading - page 3150

 
mytarmailS #:

If you're feeding it the last five candles, sure.

But if you think about it and try to cover everything that can influence the price, you will realise that there are billions of features, and you also have to take invariance into account, which makes it billions times billions.

That's why I wrote THIS.

So all the ready-made models are out of the question.

I agree with Max in general. Generating rules or conditions on a price series is limited to comparisons, logic and calendar binding. So there is definitely common ground))))

 
mytarmailS #:

If you're feeding it the last five candles, sure.

But if you think about it and try to cover everything that can influence the price, you will realise that there are billions of features, and you also have to take invariance into account, which makes it billions times billions.

That's why I wrote THIS.

So all the ready-made models are out of the question.

Nonsense.

The point of mathematics is to replace the diversity of the world around us with a minimal set of formulas that can compute that diversity.

You described, or attempted to describe, the falling of everything and everyone to Earth, while Newton derived a single formula of gravity for any object falling to Earth.

In my random forest, trees built on some transformation of a non-stationary quote series predict the future correctly in 80% of cases or more. There are no more than 150 such trees, and when the window is moved over 15,000 bars (I have not tried more) the classification error does not change. You can take a risk and use as few as 70 trees: the error is a few per cent larger. So much for your billions times billions.
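For illustration, a minimal R sketch of the kind of setup being described: a random forest on a simple transformation of a non-stationary series, with accuracy checked on a later window. The feature construction and the synthetic data are assumptions, not СанСаныч's actual pipeline.

# Hypothetical sketch (not the author's actual features): a random forest on
# lagged increments of a non-stationary series, accuracy checked on a later window.
library(randomForest)

set.seed(1)
price <- cumsum(rnorm(20000))                  # stand-in for a non-stationary quote series
ret   <- diff(price)                           # a simple "derivation" of the quotes

X <- embed(ret, 6)                             # col 1 = ret[t], cols 2:6 = ret[t-1..t-5]
y <- factor(ifelse(X[, 1] > 0, "UP", "DOWN"))  # target: sign of the next increment
X <- as.data.frame(X[, -1])                    # features: the previous 5 increments
names(X) <- paste0("lag", 1:5)

train <- 1:15000                               # ~15,000-bar training window
test  <- 15001:18000                           # later window for the error check

rf   <- randomForest(X[train, ], y[train], ntree = 150)
pred <- predict(rf, X[test, ])
mean(pred == y[test])                          # out-of-window accuracy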

 
СанСаныч Фоменко #:

In my random forest, trees built on some transformation of a non-stationary quote series predict the future correctly in 80% of cases or more.



And I earn a million an hour, I have 50 cm, and, in general, I am the president of the planet.

But I am not going to show or prove anything to anyone, just like the other blabbermouths here...

 
mytarmailS #:

If you're feeding it the last five candles, sure.

But if you think about it and try to cover everything that can influence the price, you will realise that there are billions of features, and you also have to take invariance into account, which makes it billions times billions.

That's why I wrote THIS.

So all the ready-made models are out of the question.

It's time to ask for a quota for Quantum Computing ) They seem to solve matrix operations quickly

 
Maxim Dmitrievsky #:

It's time to ask for a quota for Quantum Computing ) They seem to solve matrix operations quickly.

I'm searching for rules with associative-rule algorithms; matrix operations won't help there((.

I need terabytes of RAM, and probably more CPU as well.
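A rough arithmetic illustration of why the memory blows up (the figure of 1000 distinct events is an assumption): the number of candidate itemsets grows combinatorially with the rule length.

# Number of candidate itemsets of size k among n distinct event types.
n <- 1000                                   # assumed number of distinct events
sapply(1:4, function(k) choose(n, k))
# ~1e3, ~5e5, ~1.7e8, ~4.1e10 candidates for k = 1..4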

 
mytarmailS #:

I started the rule-search algorithm (the most efficient one there is); it couldn't cope and hung)).

I had to reduce the dimensionality by a factor of 100 (essentially by adapting a well-known compression algorithm).
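One possible way to get a reduction of that order, shown purely as an assumption and not necessarily the method actually used: keep only the rare large moves as events and treat everything else as noise.

# Hypothetical reduction: keep only the top 1% of moves as events ("UP"/"DOWN"),
# treat the rest as noise ".", so the event stream shrinks by roughly 100x.
set.seed(1)
ret    <- rnorm(1e6)                              # stand-in for m5 returns
thr    <- quantile(abs(ret), 0.99)                # top 1% of absolute moves
events <- ifelse(abs(ret) >= thr, ifelse(ret > 0, "UP", "DOWN"), ".")
sum(events != ".") / length(events)               # ~0.01 of bars survive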


That's when it started to find something...


This is what a pattern or a rule looks like in my language...

Just one rule, gentlemen.

If you think that something like this can be done by a forest or boosting, I will disappoint you: nobody has yet learnt how to feed 10 billion features into tabular models....

Just one fucking rule.

You need supercomputers for this kind of thing.

What if multiple training runs are needed? For example, to optimise some metaparameters. Then a supernetwork of supercomputers would be needed)

 
Aleksey Nikolayev #:

What if multiple training runs are needed? For example, to optimise some metaparameters. Then we would need a supernetwork of supercomputers).

First of all, it is not learning, but searching for rules.
If the rules are found, what is the point of looking for the same rules in the same data again?

Secondly, you can always do what you did before but hope for a different result
 
mytarmailS #:
First of all, this is not learning, but searching for rules.
If the rules are found, what is the point of looking for the same rules in the same data again?

Secondly, you can always do what you did before but hope for a different result

There are always some metaparameters that affect the result: the size of the history window, for example, and whatever else. The worst thing you can do with metaparameters is to ignore their existence and just pick their values out of thin air.

I remember fxsaber's statement that sometimes it is useful to treat any constant in the TS algorithm as an optimisation parameter.
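A sketch of that idea as I read it (hypothetical code on synthetic data, not fxsaber's): the history-window size is scanned like any other optimisation parameter and picked by holdout error.

# Hypothetical sketch: scan the history-window size and pick it by holdout error.
library(randomForest)

set.seed(1)
ret <- diff(cumsum(rnorm(6000)))               # stand-in increments
X <- embed(ret, 6)
y <- factor(ifelse(X[, 1] > 0, "UP", "DOWN"))
X <- as.data.frame(X[, -1]); names(X) <- paste0("lag", 1:5)

n <- nrow(X)
test <- (n - 499):n                            # fixed 500-bar holdout at the end

windows <- c(500, 1000, 2000, 4000)            # candidate history-window sizes
errors <- sapply(windows, function(w) {
  train <- (n - 500 - w + 1):(n - 500)         # the last w bars before the holdout
  rf <- randomForest(X[train, ], y[train], ntree = 100)
  mean(predict(rf, X[test, ]) != y[test])
})
setNames(errors, windows)                      # choose the window with the lowest error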

 
Aleksey Nikolayev #:

There are always some metaparameters that affect the result: the size of the history window, for example, and whatever else. The worst thing you can do with metaparameters is to ignore their existence and just pick their values out of thin air.

I remember fxsaber's statement that sometimes it is useful to treat any constant in the TS algorithm as an optimisation parameter.

The size of the history window is precisely a big limitation of classical ML with tabular data.

AR (associative rules) does not suffer from this disease: it digests unstructured data perfectly well, and each observation can be of arbitrary size.

And the "window of vision" (the history window) itself is limited only by computing power, or by common sense.


So your example with the size of the history window is actually a vote for AR and against classical ML.


Give me some more arguments, I'm interested; maybe I really did miss something. And another question: how deep are you into the topic of AR (associative rules)?


==========================================================================

Let's create a small mock-up (model) of the data with five observations.

[[1]]
 [1] "A"    "B"    "."    "."    "."    "."    "C"    "."    "."    "."    "."    "."    "SELL"

[[2]]
 [1] "."    "."    "."    "."    "."    "."    "."    "."    "."    "."    "A"    "."    "."    "B"    "."    "."    "."    "C"    "."    "."   
[21] "."    "."    "."    "."    "."    "."    "."    "."    "."    "."    "."    "SELL"

[[3]]
 [1] "."    "."    "."    "."    "."    "."    "."    "."    "."    "A"    "."    "."    "."    "."    "."    "."    "."    "."    "."    "."   
[21] "B"    "."    "."    "."    "."    "."    "."    "."    "."    "."    "."    "."    "."    "."    "."    "."    "."    "."    "."    "."   
[41] "."    "SELL"

[[4]]
 [1] "."    "."    "."    "."    "A"    "."    "."    "."    "B"    "."    "."    "."    "."    "."    "C"    "SELL"

[[5]]
 [1] "."    "."    "."    "."    "."    "."    "."    "."    "."    "."    "."    "."    "."    "."    "."    "."    "."    "."    "."    "."   
[21] "."    "."    "."    "."    "A"    "."    "."    "."    "."    "."    "."    "."    "."    "."    "."    "."    "B"    "."    "."    "."   
[41] "C"    "."    "."    "."    "."    "SELL"

Observations - let one observation be one day of m5 quotes.

Let "." denote some noise in the observations (events we are not interested in).

"A" "B" "C" ---> "SELL" is the sequence of events that leads to the "SELL" target.


All you feed the ML model is the last 5 candles and the target.

Like this:

All so that the data is in a neat tabular form.
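A sketch of what such a tabular layout might look like (the column names and the placeholder target are assumptions, just to show the shape of the data):

# Hypothetical "neat tabular form": each row = the last five m5 closes + a label.
set.seed(1)
close <- 100 + cumsum(rnorm(100))
X   <- embed(close, 6)                                  # close[t], close[t-1], ..., close[t-5]
tab <- data.frame(X[, 6:2],                             # oldest-to-newest of the last 5 closes
                  target = sample(c("SELL", "HOLD"), nrow(X), replace = TRUE))
names(tab)[1:5] <- paste0("close_lag", 5:1)
head(tab)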

How will the ML algorithm find the pattern from this example?



The AR algorithm takes all the observations as input, and the observations can be of different sizes (as in the example).

It throws out the noise while selecting rules and returns the pattern as a rule: "A" "B" "C" leads to "SELL".

It turns unstructured rubbish into patterns.
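For concreteness, a sketch of mining a shortened version of this mock-up with the R package arules (apriori). This is one way to recover the {A, B, C} => {SELL} rule, not necessarily the tool actually used; the noise runs are abbreviated here.

# Mining the mock-up with arules: drop the noise symbol, treat each
# observation as one transaction, and ask only for rules that predict SELL.
library(arules)

mock <- list(
  c("A", "B", ".", ".", "C", ".", "SELL"),
  c(".", "A", ".", "B", ".", "C", ".", "SELL"),
  c(".", "A", ".", "B", ".", ".", "SELL"),
  c("A", ".", "B", ".", "C", "SELL"),
  c(".", "A", ".", "B", ".", "C", ".", "SELL")
)

trans <- as(lapply(mock, function(x) unique(x[x != "."])), "transactions")

rules <- apriori(trans,
                 parameter  = list(supp = 0.6, conf = 0.8, minlen = 2),
                 appearance = list(rhs = "SELL", default = "lhs"))
inspect(rules)   # expect rules like {A, B, C} => {SELL}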

 
mytarmailS #:

The size of the history window is precisely a big limitation of classical ML with tabular data.

AR (associative rules) does not suffer from this disease: it digests unstructured data perfectly well, and each observation can be of arbitrary size.

And the "window of vision" (the history window) itself is limited only by computing power, or by common sense.


So your example with the size of the history window is actually a vote for AR and against classical ML.


Give me some more arguments, I'm curious; maybe I really did miss something.

And another question: how deep are you into AR?

I haven't dived deeply into association rules. As I already wrote, I came to the use of formal grammars from the other side: I looked at price as something generated by a stochastic grammar. I gave up that approach precisely because of its cumbersomeness, which is bad first of all because it provokes overfitting.

Now I avoid heavy models. My main informal rule is that the heaviness of the model should match the amount of information in the sample.

Your model is heavy enough to be a full-fledged price model, but the actual sample of prices we have (even adding other information) is not enough for such a model.

Naturally, IMHO
