Machine learning in trading: theory, models, practice and algo-trading - page 1138

 
My figure may be wrong, but I have no influence on it:

In my observation, a Sharpe ratio above 1 has not worked out yet, and I have not seen other people's accounts/charts with higher values. I don't know, maybe I'm wrong.

Aleksey Vyazmikin:

From the description of the formula it is not clear how to verify it; for example, how is the "arithmetic mean profit for the time of holding the position" calculated?

Maybe it is simply a small mathematical expectation?

In any case, I have noticed that the higher the indicator the better, and that is no small thing; besides, the main report is written to a file with indicators that I do understand.

Well, what can I tell you, esteemed gentlemen...

These are the side effects of using black boxes, third-party libraries, etc.

I can only suggest that you publish the equity of your research as CSV, and I will tell you the correct Sharpe ratio of your models; you can also calculate it yourself with the attached code (Python):

import random
import csv
import matplotlib.pyplot as plt


def rndWalk(length, start, var):
    # multiplicative random walk: each step scales the level by a small Gaussian factor
    rndwalk = []
    current = start
    for _ in range(length):
        current *= 1 + random.gauss(0, 3) * var
        rndwalk.append(current)
    return rndwalk


def ParseCsv(path, columnName):
    # read one numeric column from a CSV file
    with open(path) as f:
        return [float(row[columnName]) for row in csv.DictReader(f)]


def diff(ts):
    # first differences of a series (equity -> per-bar PnL)
    return [ts[n] - ts[n - 1] for n in range(1, len(ts))]


def SharpRatio(PnL):
    ret = sum(PnL) / len(PnL)  # mean PnL per bar
    var = (sum([(x - ret) ** 2 for x in PnL]) / len(PnL)) ** 0.5  # population std
    return len(PnL) ** 0.5 * ret / var  # scaled by sqrt(N)


rw = rndWalk(10000, 100, 0.001)
sr = SharpRatio(diff(rw))
print(sr)

plt.plot(rw)
plt.show()


The SharpRatio code itself is just 3 lines:

def SharpRatio(PnL):
    ret = sum(PnL) / len(PnL)
    var = (sum([(x - ret) ** 2 for x in PnL]) / len(PnL)) ** 0.5
    return len(PnL) ** 0.5 * ret / var


If you send me the equity or PnL, I will try to find out what the problem is. My guess is that the PnL is used "as is", i.e. with the gaps between trades left in (which is certainly not correct), hence the scaling problem; I would bet $100 on it.
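A quick sketch of that guess (my own illustration, not the tester's code): pad a per-trade PnL series with zero "no-trade" bars and compare the plain mean/std ratio of the two series. The padded figure comes out much smaller, roughly by the square root of the padding factor:

```python
import random

def mean_over_std(PnL):
    # plain per-bar ratio, without the sqrt(N) factor
    ret = sum(PnL) / len(PnL)
    std = (sum((x - ret) ** 2 for x in PnL) / len(PnL)) ** 0.5
    return ret / std

random.seed(0)
trades = [random.gauss(0.5, 1.0) for _ in range(500)]  # per-trade PnL

padded = []
for t in trades:
    padded.append(t)
    padded.extend([0.0] * 9)  # nine empty bars after each trade

print(mean_over_std(trades), mean_over_std(padded))
# the padded figure is noticeably smaller than the per-trade one
```

The drift, noise and padding factor here are arbitrary choices for the illustration; the point is only that inserting empty bars shrinks the per-bar ratio.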

 
pantural:

What can I tell you, gentlemen...

These are the side effects of using black boxes, third-party libraries, etc.

I can only suggest that you publish the equity of your research as CSV, and I will tell you the correct Sharpe ratio of your models; you can also calculate it yourself with the attached code (Python)

How should I give you the equity, hourly or by the minute?

 
Aleksey Vyazmikin:

How should I give you the equity, hourly or by the minute?

It does not matter, let's do it both ways.

By the way, one of the signs of a wrong Sharpe calculation is when the same equity at different time scales gives significantly different Sharpe ratios, while normally they should be very close.
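That claim can be checked on a synthetic equity curve (a sketch under my own assumptions about drift and noise, not the thread's data): compute the sqrt(N)-scaled ratio on per-bar differences and on every-60th-bar differences of the same curve, and the two figures should land close to each other:

```python
import random

def sharpe(PnL):
    ret = sum(PnL) / len(PnL)
    std = (sum((x - ret) ** 2 for x in PnL) / len(PnL)) ** 0.5
    return len(PnL) ** 0.5 * ret / std

random.seed(1)
equity = [0.0]
for _ in range(60000):
    equity.append(equity[-1] + 0.02 + random.gauss(0, 1))  # additive walk with drift

fine = [equity[i] - equity[i - 1] for i in range(1, len(equity))]   # per-bar PnL
coarse_eq = equity[::60]                                            # every 60th bar
coarse = [coarse_eq[i] - coarse_eq[i - 1] for i in range(1, len(coarse_eq))]

print(sharpe(fine), sharpe(coarse))
# the two figures should agree closely
```

Thinning the series divides N by 60 but multiplies the per-period mean by 60 and the std by roughly sqrt(60), so the sqrt(N)-scaled figure is approximately invariant.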
 
pantural:

Never mind, let's do it both ways.

By the way, one of the signs of a wrong Sharpe calculation is when the same equity at different time scales gives significantly different Sharpe ratios, while normally they should be very close.

Okay, but a little later; right now all the machines are busy fitting to history :)

 
pantural:

Never mind, let's do it both ways.

By the way, one of the signs of a wrong Sharpe calculation is when the same equity at different time scales gives significantly different Sharpe ratios, while normally they should be very close.

I am sending the minute-by-minute variant, and I attach the tester's trading report.

True, I have improved the figures a little.

The Sharpe ratio is now 0.29.

Files:
KS.zip  102 kb
 
Aleksey Vyazmikin:

I am sending the minute-by-minute variant, and I attach the tester's trading report.

True, I have improved the figures a little.

The Sharpe ratio is now 0.29.

The real Sharpe ratio = ~3.79.

The error of those who implemented the calculation of your figures is obvious: they simply forgot to scale the ratio of mean return to standard deviation by the square root of the series length.

def SharpRatio(PnL):
    PnL = [x for x in PnL if abs(x) > 0]  # drop zero (no-trade) bars first
    ret = sum(PnL) / len(PnL)
    var = (sum([(x - ret) ** 2 for x in PnL]) / len(PnL)) ** 0.5
    return len(PnL) ** 0.5 * ret / var


PS: SR = 3.79 is very optimistic; of course, only if it is not a fit and was tested correctly.
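As a side note (my own sketch, not from the post): the abs(x) > 0 filter in the corrected code is exactly what makes the figure independent of how many empty bars sit between trades — after filtering, a gap-padded PnL series collapses back to the per-trade series:

```python
import random

def SharpRatio(PnL):
    PnL = [x for x in PnL if abs(x) > 0]  # drop the no-trade bars
    ret = sum(PnL) / len(PnL)
    std = (sum((x - ret) ** 2 for x in PnL) / len(PnL)) ** 0.5
    return len(PnL) ** 0.5 * ret / std

random.seed(2)
trades = [random.gauss(0.3, 1.0) for _ in range(300)]  # per-trade PnL, never exactly 0
gapped = []
for t in trades:
    gapped.extend([0.0] * 14)  # fourteen empty bars before each trade
    gapped.append(t)

print(SharpRatio(trades), SharpRatio(gapped))  # identical
```

Since a Gaussian draw is never exactly 0.0, the filter strips precisely the gap bars, so both calls see the same list and return the same number.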

 
pantural:

The real Sharpe ratio = ~3.79.

The error of those who implemented the calculation of your figures is obvious: they simply forgot to scale the ratio of mean return to standard deviation by the square root of the series length.

def SharpRatio(PnL):
    PnL = [x for x in PnL if abs(x) > 0]
    ret = sum(PnL) / len(PnL)
    var = (sum([(x - ret) ** 2 for x in PnL]) / len(PnL)) ** 0.5
    return len(PnL) ** 0.5 * ret / var


PS: SR = 3.79 is very optimistic; of course, only if it is not a fit and was tested correctly.

Thanks for the recalculation!

If it really is an error, then maybe it is worth reporting it in a dedicated thread, since it is a global error in the terminal?

As for fitting, I have my own approach to ML: I collect leaves from trees and then look at their performance on history, on both the training sample and the unseen sample; those with a positive effect on both samples go to the next group for detailed selection and analysis. This is partly fitting, but with the adjustment that such a "leaf" worked before and works now, and what happens next nobody knows.

 
Aleksey Vyazmikin:

Thanks for the recalculation!

If it really is an error, then maybe it is worth reporting it in a dedicated thread, since it is a global error in the terminal?

Yes, it is undoubtedly an error and should be reported. Do you mind if I use your report as an example?

Aleksey Vyazmikin:

As for fitting, I have my own approach to ML: I collect leaves from trees and then look at their performance on history, on both the training sample and the unseen sample; those with a positive effect on both samples go to the next group for detailed selection and analysis. This is partly fitting, but with the adjustment that such a "leaf" worked before and works now, and what happens next nobody knows.

Fitting is everywhere, no matter how you look at it; the question is how to reduce it to an acceptable level.

 
pantural:

Fitting is everywhere, no matter how you look at it, the question is how to reduce it to an acceptable level.

You can't. Any optimization, any tuning, any training is fitting. You just have to accept it as inevitable and work with it.

The question should be posed differently here. Unfortunately, there are probably no general recipes, and the formulation itself may be different for different systems.

 
Yuriy Asaulenko:

You can't. Any optimization, any tuning, any training is a fitting.

Here the question must be posed differently. Unfortunately, there are probably no general recipes, and the formulation for different systems may be different.

See above: do not use parameters that need to be fitted.
