Interpolation, approximation and the like (alglib package) - page 15

 
Maxim Dmitrievsky:

That's it, I found a decent guy; he explains it well, and I immediately remembered everything


Great!!!

So is your problem with the transformation and mapping to 2D space solved for now?

Though I did not understand the Russian, I understood the formula somewhat. It's just a play of the cosine and the multiplication and division of the magnitudes of the two vectors :))
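
Presumably the formula in question is cosine similarity; a one-liner sketch in Python, with illustrative vectors:

    import numpy as np

    a = np.array([1.0, 2.0, 3.0])
    b = np.array([2.0, 0.5, 1.0])
    # cosine of the angle between a and b: their dot product
    # divided by the product of the two magnitudes
    cos_theta = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
    print(cos_theta)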

Have you got the code transformed into MQL5?

 
I have read the English correspondence. Barely. It turns out that Vapnik, whose ideas came to mind immediately after reading the topic starter's post #1, is, firstly, alive, and secondly, has developed the idea of empirical (average) risk minimization for training neural networks into the support vector method, a non-linear version of which is discussed in that correspondence. Wiki:

Support vector machine

Kernels

An algorithm for constructing an optimal separating hyperplane, proposed in 1963 by Vladimir Vapnik and Alexei Chervonenkis, is a linear classification algorithm. However, in 1992, Bernhard Boser, Isabelle Guyon, and Vapnik proposed a way to create a nonlinear classifier based on the transition from scalar products to arbitrary kernels, the so-called kernel trick (first proposed by M. Aizerman, Braverman, and Rozonoer for the potential function method), which makes it possible to construct nonlinear separators. The resulting algorithm is extremely similar to the linear classification algorithm, the only difference being that each scalar product in the above formulas is replaced by a nonlinear kernel function (a scalar product in a space of higher dimension). An optimal separating hyperplane may already exist in that space.


It seems that Maxim Dmitrievsky is pursuing goals very close to those of Vapnik. We should look to Vapnik for a basis for the choice (and selection) of approximating functions.
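
A minimal sketch of the kernel trick described above, in Python (the data and names are illustrative): for the quadratic polynomial kernel k(x, z) = (x·z)^2, the kernel value equals an ordinary scalar product taken after an explicit feature map into a higher-dimensional space, which is why each scalar product in the linear formulas can simply be replaced by k.

    import numpy as np

    def phi(x):
        # Explicit feature map for k(x, z) = (x.z)^2 in 2D:
        # (x1, x2) -> (x1^2, sqrt(2)*x1*x2, x2^2)
        return np.array([x[0]**2, np.sqrt(2) * x[0] * x[1], x[1]**2])

    def k(x, z):
        # Quadratic polynomial kernel: a 3D scalar product computed in 2D
        return np.dot(x, z) ** 2

    x = np.array([1.0, 2.0])
    z = np.array([3.0, 0.5])
    print(np.dot(phi(x), phi(z)))  # 16.0, via the explicit 3D mapping
    print(k(x, z))                 # 16.0, same value, no mapping needed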

 
Vladimir:
I have read the English correspondence. Barely. It turns out that Vapnik, whose ideas came to mind immediately after reading the topic starter's post #1, is, firstly, alive, and secondly, has developed the idea of empirical (average) risk minimization for training neural networks into the support vector method, a non-linear version of which is discussed in that correspondence. Wiki:

Support vector machine

Kernels

An algorithm for constructing an optimal separating hyperplane, proposed in 1963 by Vladimir Vapnik and Alexei Chervonenkis, is a linear classification algorithm. However, in 1992, Bernhard Boser, Isabelle Guyon, and Vapnik proposed a way to create a nonlinear classifier based on the transition from scalar products to arbitrary kernels, the so-called kernel trick (first proposed by M. Aizerman, Braverman, and Rozonoer for the potential function method), which makes it possible to construct nonlinear separators. The resulting algorithm is extremely similar to the linear classification algorithm, the only difference being that each scalar product in the above formulas is replaced by a nonlinear kernel function (a scalar product in a space of higher dimension). An optimal separating hyperplane may already exist in that space.


It seems that Maxim Dmitrievsky is pursuing goals very close to those of Vapnik. We should look to Vapnik for a basis for the choice (and selection) of approximating functions.

Yes, thanks. A Yandex conference with a Russian scientist, held in English; a shame


 
Maxim Dmitrievsky:

Yes, thank you. Yandex conference with Russian scientist in English, a shame


He moved to live in the US in 1990, when he was 54 years old. No wonder he pronounces English words with Russian sounds. I don't see anything embarrassing about it. He speaks very clearly, like most people for whom English is not their first language. Think of the pronunciation of the Greek singer Demis Roussos: remarkably clear and distinct, even in songs.
 
Vladimir:
He moved to live in the US in 1990, when he was 54 years old. No wonder he pronounces English words with Russian sounds. I don't see anything embarrassing about it. He speaks very clearly, like most people for whom English is not their first language. Think of the pronunciation of the Greek singer Demis Roussos: remarkably clear and distinct, even in songs.

Yandex is a Russian company. It's a shame that they cut off their scientists, and those who want to learn from them, from Russian.

Even the descriptions of their own machine learning algorithms are all in English.
 

Hi Maxim,

So have you progressed further with the code on which you were stuck previously?

        # SVM dual in cvxopt's QP form: minimize (1/2) a'P a + q'a
        P = cvxopt.matrix(np.outer(y, y) * K)       # P[i,j] = y_i y_j K(x_i, x_j)
        q = cvxopt.matrix(np.ones(n_samples) * -1)  # q = vector of -1s
        A = cvxopt.matrix(y, (1, n_samples))        # equality: sum_i a_i y_i = 0
        b = cvxopt.matrix(0.0)
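
For context, a minimal self-contained sketch of the quadratic program those four matrices feed into, assuming a hard-margin SVM dual solved with cvxopt.solvers.qp; the kernel argument and variable names are illustrative, not the actual code being discussed:

    import numpy as np
    import cvxopt

    def svm_dual(X, y, kernel=np.dot):
        # Solve max_a sum(a) - 1/2 sum_ij a_i a_j y_i y_j K(x_i, x_j)
        # subject to a_i >= 0 and sum_i a_i y_i = 0, with y_i in {-1, +1}.
        n_samples = X.shape[0]
        K = np.array([[kernel(xi, xj) for xj in X] for xi in X])  # Gram matrix
        P = cvxopt.matrix(np.outer(y, y) * K)
        q = cvxopt.matrix(np.ones(n_samples) * -1)
        A = cvxopt.matrix(y.astype(np.double), (1, n_samples))
        b = cvxopt.matrix(0.0)
        G = cvxopt.matrix(np.eye(n_samples) * -1)  # -a_i <= 0, i.e. a_i >= 0
        h = cvxopt.matrix(np.zeros(n_samples))
        sol = cvxopt.solvers.qp(P, q, G, h, A, b)
        return np.ravel(sol['x'])  # multipliers; a_i > 0 marks support vectors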
 
FxTrader562:

Hi Maxim,

So have you progressed further with the code on which you were stuck previously?

We can't use kernels this way, because this algorithm works only with inner products, and I don't know how to map the features back into vectors

 
Maxim Dmitrievsky:

We can't use kernels this way, because this algorithm works only with inner products, and I don't know how to map the features back into vectors

Well, that is exactly what the main function of the kernel is.

But we cannot map features in and out using the kernel function. That is not its job. The kernel function just makes the classification process faster and easier by implicitly mapping the price points to higher dimensions.

Most importantly, even if you map a feature, for example a candle close price, to a 3D space, the candle close value is not going to change in 3D. It stays exactly the same even if you map it back to 2D.
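
A small illustration of that point in Python (the lifting map here is the textbook quadratic one, chosen only for illustration): lifting a 2D point into 3D adds a coordinate but leaves the original values untouched, so projecting back to 2D returns them exactly.

    import numpy as np

    def lift(p):
        # Classic quadratic lift: (x1, x2) -> (x1, x2, x1^2 + x2^2)
        return np.array([p[0], p[1], p[0]**2 + p[1]**2])

    def project(p3):
        # Back to 2D: simply drop the added coordinate
        return p3[:2]

    close = np.array([1.1305, 1.1312])  # e.g. two candle close values
    lifted = lift(close)                # 3D point; original values unchanged
    print(project(lifted) == close)     # [ True  True ]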

That is where splines come into the picture for price interpolation, and I guess you already know splines. I mean, we feed the data in the form of a spline and get the classification done by the kernels.
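
A minimal sketch of that idea, assuming scipy's CubicSpline (the choice of features is illustrative): fit a spline through the close prices, then sample it, and for instance its slope, on a regular grid to build the feature vectors handed to the kernel classifier.

    import numpy as np
    from scipy.interpolate import CubicSpline

    closes = np.array([1.1305, 1.1312, 1.1290, 1.1278, 1.1301])  # sample data
    t = np.arange(len(closes))

    spline = CubicSpline(t, closes)  # interpolating cubic spline

    grid = np.linspace(0, len(closes) - 1, 20)
    features = np.column_stack([
        spline(grid),       # smoothed price
        spline(grid, 1),    # first derivative (slope)
    ])
    # rows of 'features' can now be fed to a kernelized classifier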

Now, if you are looking to map the features, then kindly specify exactly what you are trying to map in terms of the price. I mean, are you trying to map candle close prices, candle open prices, and so on?

 

Hi Maxim,

I'm sorry to bother you again. I just thought I'd quickly check with you whether you are still working on your RDF project.

I am trying to improve your current RDF algo, and I am also trying to integrate a Python reinforcement learning algo with MQL. Hence, I thought I'd collaborate with other programmers who are interested in it.

You can update me on whether you are still working on it and how far you have progressed so far.

I have a few interesting ideas to implement with the RDF, which I can share with you along with the MQL5 source code; I am yet to learn the bridge between MT5 and Python.

Thank you ...

 
FxTrader562:

Hi Maxim,

I'm sorry to bother you again. I just thought I'd quickly check with you whether you are still working on your RDF project.

I am trying to improve your current RDF algo, and I am also trying to integrate a Python reinforcement learning algo with MQL. Hence, I thought I'd collaborate with other programmers who are interested in it.

You can update me on whether you are still working on it and how far you have progressed so far.

I have a few interesting ideas to implement with the RDF, which I can share with you along with the MQL5 source code; I am yet to learn the bridge between MT5 and Python.

Thank you ...

Hi, yes, I am trying various ideas that are partly discussed here: https://www.mql5.com/ru/forum/86386/page1056

Machine learning in trading: theory and practice (trading and not only)
  • 2018.09.14
  • www.mql5.com
Good afternoon everyone. I know there are machine learning and statistics enthusiasts on this forum...