Discussion of article "Visual evaluation of optimization results" - page 3

 
Aleksandr Slavskii #:

Only if you write your own criteria, with logic that differs from the ones already there.

My imagination isn't great, so some of the criteria are very similar to each other, differing only in small nuances.

No, no, I'm talking about the graphs within a single criterion.
 
Mikola_2 #:
No, no, I'm talking about the graphs within a single criterion.

Well, there's no way around that.

When you run an optimization and the tester displays the results, the top lines also contain a whole bunch of nearly identical results.

The same thing happens here; the results are just sorted by the user's criteria.

 
Aleksandr Slavskii #:

Well, there's no way around that.

When you run an optimization and the tester displays the results, the top lines also contain a whole bunch of nearly identical results.

The same thing happens here; the results are just sorted by the user's criteria.

What if we sort the structure beforehand and store in m_BackBest[x][y].res only a value that doesn't coincide with the previous one? I tried it, but I couldn't get it to work... )))
 
Mikola_2 #:
What if we sort the structure beforehand and store in m_BackBest[x][y].res only a value that doesn't coincide with the previous one? I tried it, but I couldn't get it to work... )))

The thing is, there can't be any matching values there.

It's just that when they are plotted, the values are rounded, so they look identical, but they are actually different.

Try rounding the values when sorting; then the results will be what you want to see.

 

Thank you for a great article!

In my opinion, the title of the article doesn't quite reflect its main value, which is working with fitness functions. Still, everything in this article is great; kudos to the author!