Interpolation, approximation and the like (the alglib package) - page 14

 
Maxim Dmitrievsky:

I don't understand how to work with the Gram matrix now, because it is not a set of new transformed features, just a matrix of scalar products of the old features.

Well, in this case, I think you need to take the first derivative of the final scalar representation to get the vector. I mean you just need to calculate the slope of the final kernel function.

I assume there should be a built-in MQL5 library to calculate the first derivative, i.e. the slope, of any function.

In this case, if the slope is positive, it should be a BUY signal, and if the slope is negative, it should be a SELL signal.
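A minimal sketch of this slope idea in Python (the thread's only code is Python); the kernel-output series and the signal rule here are assumptions for illustration, not something from an MQL5 library:

import numpy as np

# Assumed series of kernel-function outputs over recent bars.
k_values = np.array([0.12, 0.18, 0.25, 0.22, 0.15])

# First derivative approximated by finite differences (the "slope").
slope = np.gradient(k_values)

# Sign of the latest slope as the proposed toy signal.
signal = "BUY" if slope[-1] > 0 else "SELL"
print(slope[-1], signal)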

 
FxTrader562:

Well, in this case, I think you need to take the first derivative of the final scalar representation to get the vector. I mean you just need to calculate the slope of the final kernel function.

I assume there should be a built-in MQL5 library to calculate the first derivative, i.e. the slope, of any function.

In this case, if the slope is positive, it should be a BUY signal, and if the slope is negative, it should be a SELL signal.

)) No, no.. we need them as new feature points for RDF fitting - the same 2 or n vectors, but with new points, I think.

I just can't picture it :D First we need to transform the features with the kernel, and then transform them back to features with new data points.

Or maybe the determinant of the Gram matrix gives these points.
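A hedged sketch of what the Gram matrix itself looks like, in Python with an assumed RBF kernel and made-up data; using its rows as new features (the "empirical kernel map") is just one possible reading of the post:

import numpy as np

def rbf_kernel(a, b, gamma=1.0):
    # Gaussian (RBF) kernel: scalar product in the induced feature space.
    return np.exp(-gamma * np.sum((a - b) ** 2))

# Assumed toy data: 3 samples with 2 original features each.
X = np.array([[0.1, 0.9],
              [0.4, 0.3],
              [0.8, 0.7]])
n = len(X)

# Gram matrix: pairwise kernel values between all samples.
K = np.array([[rbf_kernel(X[i], X[j]) for j in range(n)] for i in range(n)])

# One possible reading of the post: treat each row of K as the new
# feature vector for that sample and fit the RDF on these rows.
new_features = K
print(new_features)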

 
Maxim Dmitrievsky:

)) No, no.. we need them as new feature points for RDF fitting - the same 2 or n vectors, but with new points, I think.

I just can't picture it :D First we need to transform the features with the kernel, and then transform them back to features with new data points.

Or maybe the determinant of the Gram matrix gives these points.

I am getting totally confused here :))

A kernel function is a classification technique that makes the classification process run faster, right?

Why do we need to extract the feature points back out of the kernel function? We just need to feed the neural network the feature points obtained from the spline and get the classification done using RDF and kernel functions, right?

In my understanding, the feature transformation should be done by the spline function, right?

Where is the confusion? Am I the one getting confused, or are you? :))

 
FxTrader562:

I am getting totally confused here :))

A kernel function is a classification technique that makes the classification process run faster, right?

Why do we need to extract the feature points back out of the kernel function? We just need to feed the neural network the feature points obtained from the spline and get the classification done using RDF and kernel functions, right?

In my understanding, the feature transformation should be done by the spline function, right?

Where is the confusion? Am I the one getting confused, or are you? :))

No, we are using the kernel trick to project the features into other dimensional spaces, and we need the new coordinates of these projections as new data points; then we train the RDF.

It's tensor and vector algebra - I'm a noob here, but I'm learning fast )

If you know someone who knows vector algebra - please invite them.

Or let's open the topic on the English version of the forum.
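One standard way to get explicit coordinates out of a kernel projection is kernel PCA; a hedged sketch with scikit-learn (a library assumed here, not mentioned in the thread) and made-up data:

import numpy as np
from sklearn.decomposition import KernelPCA

# Assumed toy data: 4 samples with 2 original features.
X = np.array([[0.1, 0.9],
              [0.4, 0.3],
              [0.8, 0.7],
              [0.2, 0.2]])

# Kernel PCA applies the kernel trick and returns explicit coordinates
# of the projected points - usable as new data points for training.
kpca = KernelPCA(n_components=2, kernel="rbf", gamma=1.0)
X_new = kpca.fit_transform(X)
print(X_new)  # these new coordinates could then be fed to the RDF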
 
Maxim Dmitrievsky:

No, we are using the kernel trick to project the features into other dimensional spaces, and we need the new coordinates of these projections as new data points; then we train the RDF.

It's tensor and vector algebra - I'm a noob here, but I'm learning fast )

If you know someone who knows vector algebra - please invite them.

I am getting closer to understanding what you are looking for... basically, the coordinates in the higher dimension for our input vector from the lower dimension, right?

I will look into vector algebra soon. I think we can easily get everything from Google and YouTube. I will post some links if I find any.

I studied vector algebra a long, long time ago in college, so I am just quickly looking back through it.

 
FxTrader562:

I am getting closer to understanding what you are looking for... basically, the coordinates in the higher dimension for our input vector from the lower dimension, right?

I will look into vector algebra soon. I think we can easily get everything from Google and YouTube. I will post some links if I find any.

I studied vector algebra a long, long time ago in college, so I am just quickly looking back through it.

Yes, we need something like in this video:


For example, we have a 2-D feature space and we can't separate it linearly, so we add a 3rd feature, and now we can separate it with a hyperplane.

But the kernel allows us to project the points without adding the 3rd feature, so we can separate them the same way with 2 features instead of 3.

But... how can we get the transformed 2-D features that are linearly separable in the other dimension? We need a 2-D projection of the new dimension, i.e. new points from another vector space.

I think it's magic, but anyway )
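A small sketch of that "same separation with 2 features instead of 3" point: for the degree-2 polynomial kernel (my choice of example, not one named in the thread), the explicit 3-D map and the kernel value give exactly the same scalar product:

import numpy as np

def phi(x):
    # Explicit map of a 2-D point into 3-D for the degree-2 polynomial kernel.
    return np.array([x[0] ** 2, np.sqrt(2) * x[0] * x[1], x[1] ** 2])

def poly_kernel(x, z):
    # Same scalar product computed directly in 2-D, without the 3rd feature.
    return np.dot(x, z) ** 2

x = np.array([1.0, 2.0])
z = np.array([0.5, -1.0])

print(np.dot(phi(x), phi(z)))  # dot product in the explicit 3-D space: 2.25
print(poly_kernel(x, z))       # identical value via the kernel trick: 2.25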

 
Maxim Dmitrievsky:

Yes, we need something like in this video:

For example, we have a 2-D feature space and we can't separate it linearly, so we add a 3rd feature, and now we can separate it with a hyperplane.

But the kernel allows us to project the points without adding the 3rd feature, so we can separate them the same way with 2 features instead of 3.

But... how can we get the transformed 2-D features that are linearly separable in the other dimension? We need a 2-D projection of the new dimension, i.e. new points from another vector space.

Well, as I said, I studied vector algebra a long time ago, so I already have a basic understanding. But in this case, I find it a little difficult.

It's all about the dot product and the cross product.

The dot product is a scalar whose magnitude is |A||B|·cos(angle between A and B). This is called the inner product.

The cross product is the vector obtained by multiplying vectors A and B, and its magnitude is |A||B|·sin(angle between A and B). This is sometimes called the outer product. So I understood this line of code, and I think you will also understand it:

P = cvxopt.matrix(np.outer(y,y) * K)

This is just an outer product, I guess: np.outer(y, y) builds the matrix of pairwise products y_i*y_j rather than a geometric cross product.

Here is a video related to kernel mapping:

https://www.youtube.com/watch?v=7_T9AdeWY3k
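A hedged sketch of what that quoted line computes, with made-up labels y and Gram matrix K; assuming it comes from a standard SVM dual formulation, P is the element-wise product of the label outer product with the Gram matrix:

import numpy as np

# Assumed toy labels and Gram matrix (values are made up).
y = np.array([1.0, -1.0, 1.0])
K = np.array([[1.0, 0.2, 0.5],
              [0.2, 1.0, 0.1],
              [0.5, 0.1, 1.0]])

# np.outer(y, y) is the matrix of pairwise products y_i * y_j
# (a tensor outer product, not the geometric cross product).
YY = np.outer(y, y)

# Element-wise product with the Gram matrix: the P matrix of the
# SVM dual quadratic program, which cvxopt.matrix() then wraps.
P = YY * K
print(P)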

 
FxTrader562:

Well, as I said, I studied vector algebra a long time ago, so I already have a basic understanding. But in this case, I find it a little difficult.

It's all about the dot product and the cross product.

The dot product is a scalar whose magnitude is |A||B|·cos(angle between A and B). This is called the inner product.

The cross product is the vector obtained by multiplying vectors A and B, and its magnitude is |A||B|·sin(angle between A and B). This is sometimes called the outer product. So I understood this line of code, and I think you will also understand it:

This is just an outer product, I guess: np.outer(y, y) builds the matrix of pairwise products y_i*y_j rather than a geometric cross product.

Here is a video related to kernel mapping:

https://www.youtube.com/watch?v=7_T9AdeWY3k

Yes, it's from here: http://crsouza.com/2010/03/17/kernel-functions-for-machine-learning-applications/#log

But I can't separate the kernels from the SVM in that source code.

 
Maxim Dmitrievsky:

Yes, it's from here: http://crsouza.com/2010/03/17/kernel-functions-for-machine-learning-applications/#log

But I can't separate the kernels from the SVM in that source code.

As far as I can understand, the coordinate in the higher-dimensional space has to be the kernel function value together with the 2 input vectors. It means we have 2 input vectors, we need a 3rd value, and it gets added as the 3rd coordinate.

For example, if you feed 2 vectors x and y, map them to 3-D space and get the kernel value K(x,y),

then the coordinate of the final vector in 3-D space has to be (x, y, K(x,y)).

Next, if you map that to 4-D space and get the kernel value K1(x, y, K(x,y)),

then the coordinate in 4-D space should be (x, y, K(x,y), K1(x, y, K(x,y))), and so on...

Does it make sense or connect up with your existing source code?

Or another way is to get the angle of the tensor with respect to the mapping coordinate, then take the cosine of that angle and multiply it by the magnitude of the tensor.
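A literal, hedged reading of the proposal above in Python, with scalar features and an assumed Gaussian kernel; note this stacking is the poster's suggestion rather than a standard kernel-trick construction, and the 4-D step would need a kernel defined on three arguments:

import numpy as np

def k(x, y, gamma=1.0):
    # Gaussian kernel on two scalar features (an assumed kernel choice).
    return np.exp(-gamma * (x - y) ** 2)

# Two scalar input features of one sample (made-up values).
x, y = 0.3, 0.7

# The proposal: append the kernel value as a 3rd coordinate.
point_3d = (x, y, k(x, y))
print(point_3d)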
 

That's it, I've found a decent guy who explains it well - it all came back to me right away.

