Discussion of article "Using Self-Organizing Feature Maps (Kohonen Maps) in MetaTrader 5" - page 6

 
Evgeniy Scherbina:
One always sees what one wants to see.

That is exactly what you confirmed with the post above. I have no desire at all to argue about the correct translation of "Kohonen's self-organising maps", or whether there was any room for interpretation in that translation:

Evgeniy Scherbina:

I always look at the root, I knew that nobody would call it a neural network if Kohonen maps couldn't predict.

Likewise, I have no interest in discussing "quotes from S. Osovsky". As practice shows, reprints of works from English-language resources prevail in the Russian internet, so I am not sure Osovsky wrote an original work of his own - and in any case I am discussing this with forum members, not with the author.

In the link I showed my searches on this topic in the Russian internet; on BaseGroup Labs, which in my opinion is an authoritative site, there is no confirmation either...

...OK, I'm done - I don't want to repeat myself, just predict ))))

 
---:

Attached. List of changes:

1. Small change in the cIntBMP::Show(int aX, int aY, string aBMPFileName, string aObjectName, bool aFromImages=true) function.

2. added to the main script

Changes in the CSOM class:

1. Added the CSOM::HideChart function - it hides the chart, grid and other chart elements by painting them in the background colour (a rough sketch of this kind of dimming is shown right after this list).
2. Added the parameters m_chart, m_wnd, m_x0, m_y0, which specify the chart and the window where the maps are displayed, as well as the object name prefix m_sID. The prefix is taken automatically from the file name; otherwise it is set to "SOM".
3. Maps are written to a folder named m_sID.
4. The names of the bmp files are taken from the column names of the training patterns.
5. Changed the CSOM::ShowBMP function - the maps are no longer copied to the Images folder but stay in Files (copying them is very time-consuming).
6. The CSOM::NetDeinit function is replaced by the CSOM::HideBMP function.
7. The CSOM::ReadCSVData function is reworked so that the first column of the file is read as the names column.
8. Added a flag to the CSOM::Train function for showing intermediate maps: CSOM::Train(bool bShowProgress).
9. In CSOM::Train, intermediate data is now displayed every 2 seconds instead of every fixed number of iterations, and the progress notification was moved from the log to Comment (a throttling sketch is given below, after the note on rendering speed).
10. Some variable names were shortened and the functions were grouped by category.
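
The class code itself is in the attachment; purely as an illustration of point 1, here is a minimal sketch of how a chart can be dimmed using the standard MQL5 chart properties. The function name HideChartElements and the particular set of properties are my own choice, not the actual CSOM::HideChart implementation:

void HideChartElements(const long chart_id)
  {
   // read the current background colour and repaint the visible chart
   // elements with it, so that only the bitmap objects remain visible
   color bg=(color)ChartGetInteger(chart_id,CHART_COLOR_BACKGROUND);
   ChartSetInteger(chart_id,CHART_SHOW_GRID,false);        // hide the grid
   ChartSetInteger(chart_id,CHART_SHOW_PERIOD_SEP,false);  // hide period separators
   ChartSetInteger(chart_id,CHART_COLOR_FOREGROUND,bg);    // axes, scales, labels
   ChartSetInteger(chart_id,CHART_COLOR_GRID,bg);
   ChartSetInteger(chart_id,CHART_COLOR_CHART_LINE,bg);
   ChartSetInteger(chart_id,CHART_COLOR_CANDLE_BULL,bg);
   ChartSetInteger(chart_id,CHART_COLOR_CANDLE_BEAR,bg);
   ChartSetInteger(chart_id,CHART_COLOR_CHART_UP,bg);
   ChartSetInteger(chart_id,CHART_COLOR_CHART_DOWN,bg);
   ChartRedraw(chart_id);
  }

Calling ChartRedraw() at the end forces the chart to repaint immediately instead of waiting for the next tick.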

Rendering the bmp files slows the process down considerably, so it is better not to enable it unless you really need it.
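
For the same reason the progress output in point 9 is throttled by time. A minimal sketch of such a throttle with output via Comment; ShowProgress and its parameters are hypothetical names, not the actual CSOM::Train internals:

void ShowProgress(const int iteration,const int total)
  {
   static uint last_shown=0;
   uint now=GetTickCount();
   // refresh at most once every 2000 ms, but always show the final state
   if(now-last_shown<2000 && iteration<total)
      return;
   last_shown=now;
   Comment(StringFormat("SOM training: iteration %d of %d (%.1f%%)",
           iteration,total,total>0 ? 100.0*iteration/total : 0.0));
  }

GetTickCount() returns milliseconds, so the 2000 ms check reproduces the "every 2 seconds" behaviour without tying the refresh rate to the iteration count.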

In the example, the maps are based on the Expert Advisor optimisation data.
 

Kohonen maps are suitable for classifying large amounts of diverse data - say, 100 different animals. In that case you still have to classify by a single parameter, such as coat colour; the mathematics of this approach does not allow different parameters to be brought together.

For Forex decisions this approach is about as stupid as it gets. Imagine: classification by a single parameter is reduced to the decision "buy" or "don't buy". You could then make a Kohonen map with 2 nodes, which would be quite funny. Of course, there are mastodons who will build 10 thousand nodes and gaze at the map with delight, saying: ah, how beautifully it is coloured.

Take the example with the period and shift of a standard MT5 Expert Advisor: a separate Kohonen map (network?) for the smoothing period and a separate one for the shift. You sit there and wonder what to do with it.

A multilayer perceptron is a black box: if everything is done correctly, you feed various parameters into it and at the output you get an unambiguous answer - above the threshold means "yes", below the threshold means "no". That suits me better.
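
To make the threshold idea concrete, a toy sketch (not from the article code - the function, its inputs, weights and threshold are invented for illustration):

bool PerceptronSaysBuy(const double &inputs[],const double &weights[],const double threshold)
  {
   // weighted sum of the inputs, compared against a fixed threshold
   double sum=0.0;
   int n=MathMin(ArraySize(inputs),ArraySize(weights));
   for(int i=0;i<n;i++)
      sum+=inputs[i]*weights[i];
   return(sum>threshold);   // above the threshold -> "yes" (buy)
  }

Whatever happens inside the network, the decision rule at the output stays this simple: one number compared against one threshold.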

After reading several books on machine learning, I noticed one idea that keeps coming up: there is no single template for building a neural network. Each task requires its own study of the data, its own data preparation, its own search for the network structure and its own tuning of that network. In other words, some approaches are simply not suited to Forex and to making a "buy" or "don't buy" decision, and I believe Kohonen maps are among them.

Although we talented people are often wrong, as mistakes are the main strength of talent.