Discussion of article "MQL5 Wizard techniques you should know (Part 02): Kohonen Maps"

 

New article MQL5 Wizard techniques you should know (Part 02): Kohonen Maps has been published:

Today's trader is a philomath who is almost always (whether consciously or not) looking up new ideas, trying them out, and choosing to modify or discard them; an exploratory process that demands a fair amount of diligence. This clearly places a premium on the trader's time and on the need to avoid mistakes. This series of articles proposes that the MQL5 Wizard should be a mainstay for traders. Why? Because not only does the trader save time by assembling his new ideas with the MQL5 Wizard and greatly reduce mistakes from duplicate coding; he is ultimately set up to channel his energy into the few critical areas of his trading philosophy.

A common misconception with these maps is that the functor data should be an image, or two-dimensional. Images such as the one below are all too often shared as being representative of what Kohonen Maps are.

[Image: a typical 2D Kohonen map visualization]

While not wrong, I want to highlight that the functor can, and perhaps should (for traders), have a single dimension. So rather than reducing our high-dimensional data to a 2D map, we will map it onto a single line. Kohonen maps by definition are meant to reduce dimensionality, so I want us to take this to the next level for this article. The Kohonen map differs from regular neural networks both in the number of layers and in the underlying algorithm.
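As a rough illustration of what a one-dimensional functor could look like in MQL5, here is a minimal sketch; the names FEED_DIM, MAP_SIZE, Weights and InitMap are assumptions made for illustration, not taken from the article's code:

//--- Hypothetical sketch of a 1-D Kohonen "functor": a single line of neurons,
//--- each holding a weight vector with as many entries as the feed has features.
#define FEED_DIM  4      // assumed number of input features per sample
#define MAP_SIZE  16     // assumed number of neurons on the line

double Weights[MAP_SIZE][FEED_DIM];  // one weight vector per neuron

//--- initialise the map with small random weights before training
void InitMap()
  {
   for(int n=0; n<MAP_SIZE; n++)
      for(int d=0; d<FEED_DIM; d++)
         Weights[n][d]=(MathRand()/32767.0)-0.5;   // MathRand() returns 0..32767
  }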

It is a single-layer set of neurons (usually arranged on a line or a 2D grid, as mentioned above), instead of multiple layers. All the neurons on this layer, which we are referring to as the functor, connect to the feed but not to each other, meaning the neurons are not influenced by each other's weights directly and only update with respect to the feed data. The functor layer is often a "map" that organizes itself at each training iteration depending on the feed data. As such, after training, each neuron holds a weight-adjusted vector in the functor layer, and this allows one to calculate the Euclidean distance between any two such neurons.
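Continuing the sketch above, one training step and the post-training neuron distance could look like the following. The helper names TrainStep and NeuronDistance, and the Gaussian neighbourhood function, are assumptions for illustration rather than the article's actual implementation:

//--- One training step of the 1-D map sketched earlier (hypothetical helpers).
//--- The best matching unit (BMU) is the neuron closest to the feed sample;
//--- it and its neighbours along the line are pulled toward that sample.
void TrainStep(const double &feed[],double learn_rate,double radius)
  {
   int    bmu=0;
   double best=DBL_MAX;
   for(int n=0; n<MAP_SIZE; n++)
     {
      double s=0.0;
      for(int d=0; d<FEED_DIM; d++)
         s+=(Weights[n][d]-feed[d])*(Weights[n][d]-feed[d]);
      if(s<best){ best=s; bmu=n; }
     }
   for(int n=0; n<MAP_SIZE; n++)
     {
      double grid_dist=MathAbs(n-bmu);  // distance along the single line of neurons
      double influence=MathExp(-(grid_dist*grid_dist)/(2.0*radius*radius));
      for(int d=0; d<FEED_DIM; d++)
         Weights[n][d]+=learn_rate*influence*(feed[d]-Weights[n][d]);
     }
  }

//--- After training, the Euclidean distance between any two neurons' weight
//--- vectors can be read straight off the functor layer.
double NeuronDistance(int i,int j)
  {
   double s=0.0;
   for(int d=0; d<FEED_DIM; d++)
      s+=(Weights[i][d]-Weights[j][d])*(Weights[i][d]-Weights[j][d]);
   return(MathSqrt(s));
  }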

Author: Stephen Njuki
