Machine learning in trading: theory, models, practice and algo-trading - page 2239

 
Fast235:

Slim keyboard+touch, either one will be flawed

This one's definitely flawed.

 
I downloaded the files (a sample) from that contest that cannot be mentioned here, and they are all in .parquet format. I found a solution, but it needs Python; maybe someone can convert them to CSV?
Convert Parquet to CSV (stackoverflow.com)
How to convert Parquet to CSV from a local file system (e.g. Python, some library, etc.) but WITHOUT Spark? (trying to find as simple and minimalistic a solution as possible because need to automate everything and not much...
 
elibrarius:

The market, its participants and their algorithms change over time. It is strange that you expect a stable system trained once. Retrain once a week or every day (with trees it's fast).

This is a philosophical question :)

You have to understand how fast the market changes; the frequency of retraining depends on that. How do you measure it?

I think the market is made up of many different predispositions; their set is limited, and I only teach the model to identify one such predisposition and make money from it.

 
Vladimir Perervenko:

Why don't you like the out-of-the-box solution? You actually need only the part responsible for communication between MQL and Python (ZeroMQ).

Good luck

I did not know about it)) Thanks!

 
Maxim Dmitrievsky:

It's just that I don't quite understand why complicate the task with pictures when you can use a 1D convolution? :) The picture adds no information beyond the series itself.

Yes, you're right: if the feature vector is converted to a matrix and fed to a convolutional network, little will change (I've already checked :)). In my case, the idea is to make maximum use of the convolutional network's ability to find and exploit local templates. These patterns are invariant with respect to translation, that is, a multilayer convolution can find the same pattern in different places in the image. Likewise, an architecture with aggressive intermediate feature-map reduction allows a hierarchy to form between the templates on different convolutional layers. So I'm trying to find a graphical interpretation of the quote that will allow the convolution to find these templates.

 
welimorn:

Yes, you're right: if the feature vector is converted to a matrix and fed to a convolutional network, little will change (I've already checked :)). In my case, the idea is to make maximum use of the convolutional network's ability to find and exploit local templates. These patterns are invariant with respect to translation, that is, a multilayer convolution can find the same pattern in different places in the image. Likewise, an architecture with aggressive intermediate feature-map reduction allows a hierarchy to form between the templates on different convolutional layers. So I'm trying to find a graphical interpretation of the quote that will allow the convolution to find these templates.

And how do you convert a vector to a matrix?
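A common answer (an illustration, not necessarily what the poster does) is a plain row-major reshape: a length-N feature vector becomes an H x W matrix with H*W = N, which a 2D convolution can then scan:

```python
import numpy as np

features = np.arange(64, dtype=np.float32)   # a length-64 feature vector
image = features.reshape(8, 8)               # row-major 8x8 "picture"
# conv frameworks usually want an explicit channel axis, e.g. (1, 8, 8)
tensor = image[np.newaxis, :, :]
print(tensor.shape)  # (1, 8, 8)
```

Note that the reshape imposes an arbitrary 2D neighborhood on the features: element (1, 0) is feature 8, so features 0 and 8 become vertical neighbors purely as an artifact of the layout.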

 

Yes, you're right: if the feature vector is converted to a matrix and fed to a convolutional network, little will change (I've already checked :)). In my case, the idea is to make maximum use of the convolutional network's ability to find and exploit local templates. These patterns are invariant with respect to translation, that is, a multilayer convolution can find the same pattern in different places in the image. Likewise, an architecture with aggressive intermediate feature-map reduction allows a hierarchy to form between the templates on different convolutional layers. So I'm trying to find a graphical interpretation of the quote that will allow the convolution to find these templates.

By the way, is it right for us to look for patterns in different places on the chart?
I think not.
For example, suppose we found some 20-point pattern after which we should buy. If that pattern appeared not at bar 0 but 20-50-200 bars ago, it is already too late to buy; we should sell instead. Yet the convolutional network will find it and buy. It only answers whether the pattern occurred somewhere in the section of the chart shown to it, while we need to look for the pattern only at the right edge of the chart, i.e. at bar 0.

So it turns out that convolutional nets are poorly suited to working with quotes: the appearance of the pattern anywhere other than bar 0 will only hinder profitable trading.
If the chart has 100 points and the pattern has 20, the convolutional net will signal the pattern at about 80 positions!

I was going to do them, but I just changed my mind.
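The count of positions in the argument above can be checked with a plain cross-correlation, which is what a 1D convolution layer computes; the numbers 100 and 20 are taken from the post:

```python
import numpy as np

rng = np.random.default_rng(1)
chart = rng.normal(size=100)      # a 100-point chart
pattern = chart[40:60].copy()     # a 20-point pattern sitting at offset 40

# sliding the pattern over the chart gives 100 - 20 + 1 = 81 positions
# (roughly the "80 times" above), and the filter produces a response at
# every one of them, not only at the most recent bar
responses = np.correlate(chart, pattern, mode="valid")
print(len(responses))  # 81
```

Without pooling, the position information is still in the feature map, so a network can in principle learn to weight only the last position; full translation invariance appears only after global pooling.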
 

Yes, you're right: if the feature vector is converted to a matrix and fed to a convolutional network, little will change (I've already checked :)). In my case, the idea is to make maximum use of the convolutional network's ability to find and exploit local templates. These patterns are invariant with respect to translation, that is, a multilayer convolution can find the same pattern in different places in the image. Likewise, an architecture with aggressive intermediate feature-map reduction allows a hierarchy to form between the templates on different convolutional layers. So I'm trying to find a graphical interpretation of the quote that will allow the convolution to find these templates.

You could try recurrence plots. I did it, it didn't work, and it's slow again.
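For reference, a recurrence plot is just a thresholded matrix of pairwise distances within the series; a minimal numpy sketch, with `eps` as a free parameter:

```python
import numpy as np

def recurrence_plot(series, eps):
    """Binary recurrence matrix: R[i, j] = 1 where |x_i - x_j| < eps."""
    x = np.asarray(series, dtype=float)
    # all pairwise distances via broadcasting; O(n^2) memory and time
    dist = np.abs(x[:, None] - x[None, :])
    return (dist < eps).astype(np.uint8)
```

The O(n^2) matrix per window is also where the slowness mentioned above comes from.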

 
Maxim Dmitrievsky:

you can try recurrence plots. I did, but it didn't work, and it's slow, again.

Or series decomposition: PCA, for example, with an inverse transformation...

you can decompose the series into atoms and put them back together.


here are the first two components in a window of 100

here are the second and third components

here are components 3 and 4

here are components 30 and 31

this way you can decompose up to 100 components; it's a cool thing...

all this on new data, no lag etc...
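A sketch of what such a decomposition might look like with scikit-learn (window of 100, as in the post; the toy series and component counts are illustrative). Reconstructing from a subset of components is done by zeroing the rest before the inverse transform:

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
series = np.cumsum(rng.normal(size=1000))   # toy "price" series
window = 100
# rows = overlapping windows of the series
X = np.lib.stride_tricks.sliding_window_view(series, window)

pca = PCA(n_components=32).fit(X)
Z = pca.transform(X)

# keep only the first two components ("atoms") and invert back
Z2 = np.zeros_like(Z)
Z2[:, :2] = Z[:, :2]
recon = pca.inverse_transform(Z2)
print(recon.shape)  # one smoothed curve per window, same shape as X
```

Keeping all components reproduces the windows exactly; keeping a few gives the smoothed curves described above, and summing disjoint subsets puts the series back together.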



Huh... Most people probably didn't even understand what I was talking about at all ))))

 
mytarmailS:

Or series decomposition: PCA, for example, with an inverse transformation...

you can decompose the series into atoms and put them back together


here are the first two components in a window of 100

here are the second and third components

here are components 3 and 4

here are components 30 and 31

this way you can decompose up to 100 components; it's a cool thing...

all this on new data, no lag etc...



Huh... Most people probably didn't even understand what I was talking about at all ))))

That's for sure. Most just didn't understand what you were talking about at all. Well, that's the way it's supposed to be.

By a roundabout route you have arrived at a construction that has long been known, is widely applied in practice and well proven, called a "vernier (nonius) tracking system". And although not yet to the fullest extent, you have grasped the essence.
