Machine learning in trading: theory, models, practice and algo-trading - page 1967

 
mytarmailS:

I don't know. I read it 4 times and didn't understand it. Maybe the "teacher" is reinforcement?

+ there is also voting from the layers.

Well, yes, there is memory in LVQ in the form of the last outputs of the LVQ neurons, but as I understand it, that memory is only one step back...

What about this fantastic analogy with the kettle and the coffee? That's the whole grail.


He didn't tell you anything?

I wonder where they teach this stuff. It's like cybernetics, robotics, and...

Data scientists are just physicists from a tech institute )


==============================================================

there's also dynamic LVQ: dlvq

https://www.rdocumentation.org/packages/RSNNS/versions/0.4-2/topics/dlvq

Details

dlvq: Input data must be normalized in order to use DLVQ.

DLVQ learning: an average vector (prototype) is computed for each class and stored in a (newly created) hidden unit. The network is then used to classify each pattern by the nearest prototype. If a pattern is misclassified as class y instead of class x, the prototype of class y is moved away from the pattern and the prototype of class x is moved toward it. This procedure is repeated iteratively until no more changes in classification occur. Then new prototypes are introduced into the network for each class as new hidden units, initialized by the average vector of the misclassified patterns of that class.

Network architecture: the network has only one hidden layer containing one unit for each prototype. The prototypes/hidden units are also called codebook vectors. Since SNNS generates units automatically and does not require a prior specification of their number, the procedure in SNNS is called dynamic LVQ.

The default initialization, learning and update functions are the only ones suitable for this type of network. The three parameters of the learning function define two learning rates (for correctly / incorrectly classified cases) and the number of cycles the network is trained for before the average vectors are computed.
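The procedure described above can be sketched in a few lines. This is a rough NumPy illustration of the DLVQ idea, not the actual SNNS implementation; the function names, the single growth step, and the learning rates are my assumptions:

```python
import numpy as np

def dlvq_fit(X, y, lr_pos=0.03, lr_neg=0.03, cycles=10):
    """Sketch of DLVQ: start with one prototype (the class mean) per class,
    nudge prototypes toward/away from patterns, then grow one new prototype
    per class from the patterns that are still misclassified."""
    classes = list(np.unique(y))
    protos = [X[y == c].mean(axis=0) for c in classes]   # the "average vectors"
    labels = classes[:]

    for _ in range(cycles):
        changed = False
        for xi, yi in zip(X, y):
            j = int(np.argmin(np.linalg.norm(np.array(protos) - xi, axis=1)))
            if labels[j] == yi:
                protos[j] = protos[j] + lr_pos * (xi - protos[j])  # pull closer
            else:
                protos[j] = protos[j] - lr_neg * (xi - protos[j])  # push away
                changed = True
        if not changed:          # classification no longer changes
            break

    # growth step: a new prototype per class, initialized to the mean of
    # that class's still-misclassified patterns
    pred = dlvq_predict(X, np.array(protos), np.array(labels))
    for c in classes:
        miss = X[(y == c) & (pred != c)]
        if len(miss):
            protos.append(miss.mean(axis=0))
            labels.append(c)
    return np.array(protos), np.array(labels)

def dlvq_predict(X, protos, labels):
    """Nearest-prototype classification over the codebook vectors."""
    return np.array([labels[int(np.argmin(np.linalg.norm(protos - xi, axis=1)))]
                     for xi in X])
```

The hidden layer of the SNNS network corresponds to the `protos` array here: one unit per codebook vector, added dynamically.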

References

Kohonen, T. (1988), Self-organization and associative memory, Vol. 8, Springer-Verlag.



=========================================================


Man, I read about this LVQ; it's practically the same as Kohonen's SOM, only supervised

https://machinelearningmastery.com/learning-vector-quantization-for-machine-learning/#:~:text=The%20Learning%20Vector%20Quantization%20algorithm,those%20instances%20should%20look%20like.

What he has is not LVQ, it's VQ

He doesn't respond

Probably because of the sparse connections: not all neurons are always active, so the memory may be retained for longer... Plus there is associative memory (which sets of features belong to which cluster). That's what the codebook is.

Well, this is from control theory; they probably teach it at uni. The original article is from 2015, by some Chinese authors. I don't have access to it. This one is probably already a rework of it.
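The "codebook" in plain VQ is just the set of cluster prototypes, and the associative lookup (which feature set belongs to which cluster) is a nearest-prototype search. A minimal sketch using plain k-means as the quantizer (my substitution for illustration, not the network from the article):

```python
import numpy as np

def build_codebook(X, k, iters=20, seed=0):
    """Plain k-means as a vector quantizer: the centroids are the codebook."""
    rng = np.random.default_rng(seed)
    codebook = X[rng.choice(len(X), size=k, replace=False)].astype(float)
    for _ in range(iters):
        # associate every feature vector with its nearest code vector
        idx = np.argmin(((X[:, None, :] - codebook[None, :, :]) ** 2).sum(-1),
                        axis=1)
        for j in range(k):
            if np.any(idx == j):
                codebook[j] = X[idx == j].mean(axis=0)
    return codebook

def quantize(x, codebook):
    """The associative lookup: which cluster does this feature set belong to?"""
    return int(np.argmin(((codebook - x) ** 2).sum(axis=-1)))
```

LVQ differs from this only in that the codebook vectors are afterwards pulled toward / pushed away from labeled patterns.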

 
Maxim Dmitrievsky:

What he has is not LVQ, it's VQ

He doesn't respond

Probably because of the sparse connections: not all neurons are always active, so the memory may be retained for longer... Plus there is associative memory (which sets of features belong to which cluster). That's what the codebook is.

Well, this is from control theory; they probably teach it at uni. The original article is from 2015, by some Chinese authors. I don't have access to it. This one is probably already a rework of it.

So vector quantization is done first on the raw data, and then again taking the result into account, and the result is more accurate. At least a filter for negative results appears. Sets of features for different clusters: it's like dividing a series into different stable segments.

 
Valeriy Yastremskiy:

So vector quantization is done first on the raw data, and then again taking the result into account, and the result is more accurate. At least a filter for negative results appears. Sets of features for different clusters: it's like dividing a series into different stable segments.

It's hard to imagine where long memory would come from. For example, at the previous iteration the last neuron produced a zero; add it to the input vector of the first neuron at the next iteration. That's +1 dimension, i.e., we place the features in a new space and get a more complex conditional state that depends on previous actions. The first neuron fired and sent a one to the last neuron; the last returned 0 or 1 to the first. Suppose there are only 2 clusters. Where does memory deeper than 1 step back come from?

Suppose there is a 3rd neuron, which takes in another +1 value. An even more complex conditional state. And so, step by step, the memory is stored... It's hard to imagine :)
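The one-step feedback described above can be written down directly. `cluster_fn` here is a stand-in for the clustering neuron (an assumption for illustration, not the actual network from the thread):

```python
import numpy as np

def run_with_feedback(features_seq, cluster_fn):
    """One-step memory: the previous output is appended to the current
    feature vector (the '+1 dimension'), so the same raw features can land
    in different clusters depending on the previous step."""
    prev_out = 0.0
    outputs = []
    for f in features_seq:
        x = np.append(np.asarray(f, dtype=float), prev_out)  # features + memory
        out = cluster_fn(x)
        outputs.append(out)
        prev_out = float(out)
    return outputs
```

With a toy `cluster_fn` that fires when the extended vector sums to at least 1, the feature sequence [0, 1, 0, 0] produces [0, 1, 1, 1]: the identical raw feature 0 is clustered differently once the fed-back output is 1. That is exactly one step of memory and, by itself, no more.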

 
Maxim Dmitrievsky:

......... memory stored... hard to imagine :)

even harder for me )

====


associative network

https://www.rdocumentation.org/packages/RSNNS/versions/0.4-2/topics/assoz


This is the same as clustering, isn't it? And the associative image is a prototype of a cluster

 
mytarmailS:

It's even harder for me )

In layer 2 you can also add memory in the form of recurrent links, but it works without them. So the memory is still in layer 1.

 
mytarmailS:

it's even harder for me )

====


associative network

https://www.rdocumentation.org/packages/RSNNS/versions/0.4-2/topics/assoz


This is the same as clustering, isn't it? And the associative image is a prototype of a cluster

well, yes, but it has no memory of the agent's previous actions, that's different

I'll go read the manual again, then I'll mess with the code.

 
Maxim Dmitrievsky:

Well, yes, but there is no memory of the agent's previous actions, this is different

I'll go read the manual again, then I'll mess with the code

Look, let's think about it )

an agent's action is an image, a pattern (a cluster)

a sequence of actions (clusters) is memory


an agent's action, or anything else, can be represented as a sequence of clusters

but take a pattern like "pour the coffee": the coffee must already be brewed first


this can be represented as a pattern of transitions
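The kettle-and-coffee precondition can be written down as a transition table over cluster labels. The labels below are made up for illustration:

```python
# Hypothetical action clusters; each entry lists which actions may follow it.
ALLOWED = {
    "boil_kettle": {"brew_coffee"},
    "brew_coffee": {"pour_coffee"},
    "pour_coffee": set(),
}

def valid_sequence(actions):
    """The sequence of actions (clusters) carries the memory: 'pour_coffee'
    is only valid if 'brew_coffee' came right before it."""
    return all(nxt in ALLOWED.get(cur, set())
               for cur, nxt in zip(actions, actions[1:]))
```

So "boil_kettle, brew_coffee, pour_coffee" is a valid pattern of transitions, while "boil_kettle, pour_coffee" is not: pouring without brewing violates the transition pattern.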


 
mytarmailS:

Look, let's think about it )

an agent's action is an image, a pattern (a cluster)

a sequence of actions (clusters) is memory

an agent's action, or anything else, can be represented as a sequence of clusters

Not exactly. The agent's previous action + the state of the environment (a set of features) together form a pattern (a conditional state). Both the previous action and the features are packed into 1 vector.

But the neurons hold no explicit information about the sequence of patterns seen, only what survives through the saved output signals. And those only ever processed 1 (the current) pattern. That is, the more complex conditional states must somehow be encoded by a group of neurons.

 

Maxim Dmitrievsky:

Both the previous action and the features are packed into 1 vector.

But the neurons hold no explicit information about the sequence, only what survives through the saved output signals. And those only ever processed 1 pattern.

Well, this can also be reduced to 1 vector, with the same umap. I compressed 2k features that way )

 
mytarmailS:

Well, this can also be reduced to 1 vector, with the same umap. I compressed 2k features that way )

that's what this layer does.
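The compression mentioned here uses umap, which is a separate package; the same idea (many features per sample reduced to a short vector) can be sketched with plain PCA via SVD. A linear stand-in of my own, not what either poster actually ran:

```python
import numpy as np

def compress(X, n_components=2):
    """Project rows of X (e.g. 2000 features per sample) onto the top
    principal components, reducing each sample to a short vector."""
    Xc = X - X.mean(axis=0)               # center the feature columns
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:n_components].T       # coordinates in component space
```

umap would do the same shape transformation (n_samples × 2000 in, n_samples × n_components out), just with a nonlinear embedding instead of a linear projection.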
