Machine learning in trading: theory, models, practice and algo-trading - page 1966

 
Maxim Dmitrievsky:

Switch to Python, I'll give you examples and you'll use them.

I see no point in discussing it on the forum, because RL is far from an entry-level topic.

As soon as I finish the book I'll ask)))) It seems to remember something before making a deal: the time relative to the day of the week and month, lagged increments. Looking at the trade history, I can't tell what kind of strategy it is using.

 
Maxim Dmitrievsky:

Switch to Python, I'll give you examples and you'll use them.

I see no point in discussing it on the forum, because RL is far from an entry-level topic.


Can you send it to my email?

eugen420@gmail.com

 
Evgeni Gavrilovi:


Can you send it to my email?

Only to him, the rest for the money

 
Maxim Dmitrievsky:

Switch to Python, I'll give you examples and you'll use them.

I see no point in discussing it on the forum, because RL is far from an entry-level topic.

I will try...

 
mytarmailS:

I will try...

Check out the intro videos on YouTube. It's the same as R, only with its own peculiarities; you won't have any problems switching over. And you can pull quotes from MT5 and open trades directly.
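For reference, a minimal sketch of "pull quotes from MT5 and open trades" using the official MetaTrader5 Python package; the symbol, lot size and filling mode below are illustrative assumptions, not a recommendation:

```python
import MetaTrader5 as mt5
import pandas as pd

if not mt5.initialize():
    raise RuntimeError(f"MT5 initialize() failed, error code: {mt5.last_error()}")

# Pull the last 1000 H1 bars of EURUSD into a DataFrame
rates = mt5.copy_rates_from_pos("EURUSD", mt5.TIMEFRAME_H1, 0, 1000)
df = pd.DataFrame(rates)
df["time"] = pd.to_datetime(df["time"], unit="s")

# Send a market buy order for 0.1 lots (values are illustrative)
request = {
    "action": mt5.TRADE_ACTION_DEAL,
    "symbol": "EURUSD",
    "volume": 0.1,
    "type": mt5.ORDER_TYPE_BUY,
    "price": mt5.symbol_info_tick("EURUSD").ask,
    "deviation": 20,
    "type_filling": mt5.ORDER_FILLING_FOK,
}
result = mt5.order_send(request)
print(result.retcode)

mt5.shutdown()
```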
 
Maxim Dmitrievsky:
Check out the intro videos on YouTube.

Yes, that's what I do))

 
mytarmailS:

Yes, that's what I do))

Understanding would come quicker if the manual were written without errors:

The cluster layer is a Kohonen vector quantization (LVQ) neural network. The cluster layer groups the dendrite results according to the standard LVQ algorithm. Recall that LQV implements unsupervised online learning.

First, the letters in the acronym are mixed up (it should be LVQ); second, LVQ is a SUPERVISED learning method,

while the unsupervised method is called VQ (vector quantization), so most likely what is actually in that network is VQ, not LVQ (a quick sketch of the difference is below).
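To make the difference concrete, a small numpy sketch of both update rules (learning rate and data shapes are illustrative): plain VQ pulls the winning prototype toward the input regardless of any label, while LVQ1 pulls it toward the input only when the class labels match and pushes it away otherwise:

```python
import numpy as np

def vq_update(prototypes, x, lr=0.05):
    """Unsupervised VQ: pull the nearest prototype toward the sample."""
    w = np.argmin(np.linalg.norm(prototypes - x, axis=1))  # winner index
    prototypes[w] += lr * (x - prototypes[w])
    return prototypes

def lvq1_update(prototypes, proto_labels, x, y, lr=0.05):
    """Supervised LVQ1: pull the nearest prototype toward the sample if the
    labels agree, push it away if they disagree."""
    w = np.argmin(np.linalg.norm(prototypes - x, axis=1))
    sign = 1.0 if proto_labels[w] == y else -1.0
    prototypes[w] += sign * lr * (x - prototypes[w])
    return prototypes
```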


It is important to note that the LQV subnet kept a memory of the previous input signals in the form of the last outputs of the LQV neurons. Therefore, more information was available to the neural network than was directly fed to its input.

I don't understand this yet. Most likely they mean the connections that feed the subnetwork outputs back to the neurons; those simply store information about past actions.

That is, the memory is stored by the LQV subnetwork.


 
Maxim Dmitrievsky:

Understanding would come quicker if the manual were written without errors:

The cluster layer is a Kohonen vector quantization (LVQ) neural network. The cluster layer groups the dendrite results according to the standard LVQ algorithm. Recall that LQV implements unsupervised online learning.

First, the letters in the acronym are mixed up (it should be LVQ); second, LVQ is a SUPERVISED learning method,

while the unsupervised method is called VQ (vector quantization), so most likely what is actually in that network is VQ, not LVQ.

Dunno... I've read it 4 times and still don't get it; maybe the "teacher" there is the reinforcement?

+ there's also voting coming from layers.

Maxim Dmitrievsky:

It is important to note that the LQV subnet kept a memory of the previous input signals in the form of the last outputs of the LQV neurons. Therefore, more information was available to the neural network than was directly fed to its input.

I don't understand this yet. Most likely they mean the connections that feed the subnetwork outputs back to the neurons; those simply store information about past actions.

That is, the memory is stored by the LQV subnetwork.

Well, yes, the memory in the LQV is in the form of the last outputs of the LQV neurons, but as I understand it, that memory is only one step back...
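If that reading is right, the "memory" amounts to concatenating the previous step's cluster-layer outputs to the current input, i.e. exactly one step back. A minimal sketch of that wiring, assuming a winner-take-all (one-hot) output encoding; this is only an interpretation of the text, not something the article states:

```python
import numpy as np

def cluster_layer_output(prototypes, x):
    """Winner-take-all output of the cluster layer: one-hot of the nearest prototype."""
    w = np.argmin(np.linalg.norm(prototypes - x, axis=1))
    out = np.zeros(len(prototypes))
    out[w] = 1.0
    return out

def run_with_memory(prototypes, inputs):
    """Each input is extended with the previous step's layer output (one step of memory).
    The prototypes must live in the extended space (n_features + n_prototypes)."""
    prev_out = np.zeros(len(prototypes))
    for x in inputs:
        extended = np.concatenate([x, prev_out])   # current features + last outputs
        prev_out = cluster_layer_output(prototypes, extended)
        yield prev_out
```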

What about this fantastic analogy with the kettle and the coffee? That's the whole grail.


He didn't tell you anything?

I wonder where they teach this stuff. It's like cybernetics, robotics, and...

Data scientists are just physicists from the tech institute.)


==============================================================

there's also dynamic LVQ - dlvq

https://www.rdocumentation.org/packages/RSNNS/versions/0.4-2/topics/dlvq

Details

dlvq : Input data must be normalized to use DLVQ.

DLVQ learning: a mean vector (prototype) is computed for each class and stored in a (newly created) hidden unit. The network is then used to classify each pattern by the nearest prototype. If a pattern is misclassified as class y instead of class x, the prototype of class y is moved away from the pattern and the prototype of class x is moved toward the pattern. This procedure is repeated iteratively until no more changes in classification occur. Then new prototypes are introduced into the network for each class as new hidden units, initialized with the mean vector of the misclassified patterns of that class.

Network architecture: the network has only one hidden layer containing one unit for each prototype. The prototypes/hidden units are also called codebook vectors. Since SNNS generates units automatically and does not require a prior specification of their number, the procedure in SNNS is called dynamic LVQ.

The default initialize, learn and update functions are the only ones suitable for this type of network. The three parameters of the learn function define two learning rates (for correctly / incorrectly classified cases) and the number of cycles for which the network is trained before calculating the average vectors.

References

Kohonen, T. (1988), Self-organization and associative memory, Vol. 8, Springer-Verlag.



=========================================================
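A rough Python sketch of the DLVQ procedure described in the excerpt above (not the SNNS code itself; the learning rate and loop caps are assumptions):

```python
import numpy as np

def dlvq_fit(X, y, lr=0.03, max_outer=5, max_inner=50):
    """Rough sketch of the DLVQ procedure (not the SNNS implementation).
    X is assumed to be normalized; y holds class labels."""
    classes = np.unique(y)
    protos = [X[y == c].mean(axis=0) for c in classes]   # one mean prototype per class
    labels = list(classes)

    def nearest(x):
        return int(np.argmin([np.linalg.norm(x - p) for p in protos]))

    for _ in range(max_outer):
        # LVQ refinement: repeat while misclassifications still occur (capped at max_inner passes)
        for _ in range(max_inner):
            errors = 0
            for x, t in zip(X, y):
                w = nearest(x)
                if labels[w] == t:
                    continue
                errors += 1
                protos[w] = protos[w] - lr * (x - protos[w])          # push wrong-class prototype away
                same = [i for i, l in enumerate(labels) if l == t]
                c = min(same, key=lambda i: np.linalg.norm(x - protos[i]))
                protos[c] = protos[c] + lr * (x - protos[c])          # pull right-class prototype closer
            if errors == 0:
                break
        # grow the network: add a prototype per class from the mean of still-misclassified patterns
        preds = np.array([labels[nearest(x)] for x in X])
        grown = False
        for c in classes:
            wrong = X[(y == c) & (preds != c)]
            if len(wrong) > 0:
                protos.append(wrong.mean(axis=0))
                labels.append(c)
                grown = True
        if not grown:
            break
    return np.array(protos), np.array(labels)
```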


Man, I read about this LVQ; it is practically the same Kohonen (SOM), only supervised.

https://machinelearningmastery.com/learning-vector-quantization-for-machine-learning/#:~:text=The%20Learning%20Vector%20Quantization%20algorithm,those%20instances%20should%20look%20like.

 

Has anyone dealt with quantizing a numeric range tied to the target? In my case it comes out as non-uniform "quantization" - auto-fitting to the target with a constraint either on the minimum number of values in a range or on the width of the range itself in numeric terms - I have not decided yet.

Experiments with CatBoost show that quantization strongly affects the result (in some cases by up to 15% accuracy).
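For reference, CatBoost exposes its quantization through parameters such as border_count and feature_border_type, so the effect can be checked directly; a small sketch on toy data (the concrete values are illustrative):

```python
from catboost import CatBoostClassifier
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split

# Toy data just to make the sketch runnable; replace with real features/target.
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_val, y_train, y_val = train_test_split(X, y, test_size=0.3, random_state=0)

# Two models that differ only in how numeric features are split into quantization borders.
coarse = CatBoostClassifier(border_count=32, feature_border_type="Median",
                            iterations=300, verbose=False, random_seed=0)
fine = CatBoostClassifier(border_count=254, feature_border_type="GreedyLogSum",
                          iterations=300, verbose=False, random_seed=0)

for name, model in [("coarse", coarse), ("fine", fine)]:
    model.fit(X_train, y_train, eval_set=(X_val, y_val))
    print(name, "accuracy:", model.score(X_val, y_val))
```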

 
Aleksey Vyazmikin:

Has anyone dealt with quantizing a numeric range tied to the target? In my case it comes out as non-uniform "quantization" - auto-fitting to the target with a constraint either on the minimum number of values in a range or on the width of the range itself in numeric terms - I have not decided yet.

Experiments with CatBoost show that quantization strongly affects the result (in some cases by up to 15% accuracy).

https://cran.r-project.org/web/packages/glmdisc/vignettes/glmdisc.html

package for discretization (quantization) with respect to the target
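glmdisc is an R package; a rough Python analogue of the same idea (target-aware binning) is to let a shallow decision tree choose the cut points per feature, with min_samples_leaf standing in for the "minimum number of values per range" constraint mentioned above. A sketch, not a substitute for glmdisc:

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def supervised_bins(x, y, max_bins=8, min_samples_leaf=50):
    """Pick cut points for a single numeric feature with a shallow tree fit against
    the target, then bin the feature by those borders."""
    tree = DecisionTreeClassifier(max_leaf_nodes=max_bins,
                                  min_samples_leaf=min_samples_leaf)
    tree.fit(x.reshape(-1, 1), y)
    t = tree.tree_
    borders = np.sort(t.threshold[t.children_left != -1])   # thresholds of internal nodes
    return np.digitize(x, borders), borders

# usage (hypothetical column names):
# binned, borders = supervised_bins(df["feature"].to_numpy(), df["target"].to_numpy())
```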
