Machine learning for robots - page 5

 
Ivan Negreshniy:

Of course I've tried it, and not only me; for example, in the ML thread there are people who have done it, repeating the mantra about garbage in and apparently forgetting that garbage at the formal output of supervised learning is not much better, while selecting and shuffling feature vectors does not save you from overfitting.

I am trying to label the signals manually, but should the sample be evenly balanced between classes, or should I label only the entries that make logical sense?

How does the network cope with non-stationarity? Does it cope at all, given that one and the same pattern may span either 15 bars or 150?

 
mytarmailS:

I am trying to label the signals manually, but should the sample be evenly balanced between classes, or should I label only the entries that make logical sense?

How does the network cope with non-stationarity? Does it cope at all, given that one and the same pattern may span either 15 bars or 150?

Some models are sensitive to the number of signals per class and need the sample balanced, others do not. I think we can start with a random forest and my self-written network, both of which are fairly undemanding; as for the pattern size, we can take the maximum as the basis.
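
A minimal sketch of that starting point, with assumptions of my own (Python/scikit-learn, synthetic variable-length patterns in place of real labelled signals): pad every pattern to the maximum size and let a random forest's class weights deal with unevenly distributed labels.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
MAX_BARS = 150  # "take the maximum as the basis" for the pattern size

def to_fixed_window(prices, max_bars=MAX_BARS):
    """Left-pad (or crop) a variable-length pattern of 15..150 bars to max_bars."""
    w = np.asarray(prices, dtype=float)[-max_bars:]
    return np.pad(w, (max_bars - len(w), 0), constant_values=w[0])

# Synthetic stand-in data: variable-length "patterns" with random labels,
# only to show the mechanics; real signals would come from labelled charts.
patterns = [rng.standard_normal(rng.integers(15, 151)).cumsum() for _ in range(500)]
labels = rng.integers(0, 2, size=500)

X = np.vstack([to_fixed_window(p) for p in patterns])
X_tr, X_te, y_tr, y_te = train_test_split(X, labels, test_size=0.3, random_state=0)

model = RandomForestClassifier(n_estimators=300,
                               class_weight="balanced",  # handles uneven class counts
                               random_state=0)
model.fit(X_tr, y_tr)
print("train acc:", model.score(X_tr, y_tr), "test acc:", model.score(X_te, y_te))
```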
 
Ivan Negreshniy:

Now for organising and discussing experiments:

  • Any willing author creates templates with trading signals of his strategy and posts them in this thread.
  • I process the templates, create Expert Advisors or indicators, and post them here in compiled form.
  • Everyone else can freely download templates and robots, test them and give their expert opinion.

Why so complicated? It can be done much more simply.

Generate many trades at random on history. Some of them are successful, some are not. On this sample we train the system with ML methods; ML will classify them and find the patterns.

I did this on a sequence of ~10 thousand trades. Even a simple ML system learns well and shows 80-85% successful trades in the test, which is already very strange for simple ML, since it is simply not able to memorize that many trades; the only explanation is that ML actually finds and generalizes some patterns.

Yes, but all these miracles are observed only on the training sequence).
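
A minimal sketch of this random-trade labelling scheme, under assumptions of my own (a synthetic random-walk price series, a fixed holding period, scikit-learn): open trades at random bars, label each one by its outcome, and fit a classifier on the bars preceding the entry.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
price = rng.standard_normal(20_000).cumsum()       # synthetic price history
LOOKBACK, HOLD = 50, 20                            # feature window and holding period

entries = rng.integers(LOOKBACK, len(price) - HOLD, size=10_000)  # ~10k random trades
X = np.array([np.diff(price[i - LOOKBACK:i]) for i in entries])   # returns before entry
y = (price[entries + HOLD] > price[entries]).astype(int)          # 1 = profitable long

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=1)
clf = GradientBoostingClassifier().fit(X_tr, y_tr)
print("train acc:", clf.score(X_tr, y_tr))
print("test  acc:", clf.score(X_te, y_te))
```

On a pure random walk the out-of-sample score stays near 50%, which is exactly the train-only "miracle" noted above.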

 
Yuriy Asaulenko:

Why so complicated? It can be done much more simply.

Generate many trades at random on history. Some of them are successful, some are not. On this sample we train the system with ML methods; ML will classify them and find the patterns.

I did this on a sequence of ~10 thousand trades. Even a simple ML system learns well and shows 80-85% successful trades in the test, which is already very strange for simple ML, since it is simply not able to memorize that many trades; the only explanation is that ML actually finds and generalizes some patterns.

Yes, but all these miracles are observed only on the training sequence).

Well, yes, with full overfitting the training sequence may give 100%, but the task is not to memorize but to generalize and get results on the forward test.

That is why in this experiment it is suggested to train not on random trades or on all possible profitable trades, but on trades (signals) filtered by the readings of some indicator.

That way every signal will already carry a formalized dependence on the time series, and the neural network will only have to identify it and establish a rule for excluding the bad signals that did not make it into the sample.
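
A minimal sketch of this filtering idea, with assumptions of my own (a moving-average crossover standing in for "some indicator", synthetic prices, scikit-learn): keep only the indicator's signals, label each by its outcome, and train a small network whose only job is to reject the bad ones.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(2)
price = rng.standard_normal(20_000).cumsum()

def sma(x, n):
    return np.convolve(x, np.ones(n) / n, mode="valid")

fast, slow = sma(price, 10), sma(price, 50)
fast = fast[-len(slow):]                            # align both SMAs at the end
# Bullish crossovers; +50 maps the SMA index back to the price bar it ends on.
cross = np.where((fast[1:] > slow[1:]) & (fast[:-1] <= slow[:-1]))[0] + 50

LOOKBACK, HOLD = 50, 20
cross = cross[(cross >= LOOKBACK) & (cross < len(price) - HOLD)]
X = np.array([np.diff(price[i - LOOKBACK:i]) for i in cross])   # context of each signal
y = (price[cross + HOLD] > price[cross]).astype(int)            # good vs bad signal

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=2)
net = MLPClassifier(hidden_layer_sizes=(32,), max_iter=1000, random_state=2).fit(X_tr, y_tr)
print("signals:", len(y), "test accuracy of the filter:", net.score(X_te, y_te))
```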

 
Yuriy Asaulenko:

Why so complicated? It can be done much more simply.

Generate many trades at random on history. Some of them are successful, some are not. On this sample we train the system with ML methods; ML will classify them and find the patterns.

I did this on a sequence of ~10 thousand trades. Even a simple ML system learns well and shows 80-85% successful trades in the test, which is already very strange for simple ML, since it is simply not able to memorize that many trades; the only explanation is that ML actually finds and generalizes some patterns.

Yes, but all these miracles are observed only on the training sequence).

Your knowledge of ML, unfortunately, still tends to zero.

That is why your internal neural network cannot yet come to a consensus on what the point of doing this is at all.
 
Maxim Dmitrievsky:

Your knowledge of ML, unfortunately, still tends to zero.

That is why your internal neural network cannot yet come to a consensus on what the point of doing this is at all.

Don't get so worked up, Maxim.) Everyone here already knows that the only thing cooler than you is eggs.

 
Ivan Negreshniy:

Well, yes, with full overfitting the training sequence may give 100%, but the task is not to memorize but to generalize and get results on the forward test.

That is why in this experiment it is suggested to train not on random trades or on all possible profitable trades, but on trades (signals) filtered by the readings of some indicator.

That way every signal will already carry a formalized dependence on the time series, and the neural network will only have to identify it and establish a rule for excluding the bad signals that did not make it into the sample.

When the training sample is much larger than the dimensionality of the NN, overfitting is practically impossible.

On small samples, overfitting happens in no time. Say you were given 200 real trades.
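
A minimal sketch of this sample-size point, with assumptions of my own (pure noise features and random labels, scikit-learn, so any above-chance training score is memorization):

```python
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(3)

for n_trades in (200, 10_000):
    X = rng.standard_normal((n_trades, 50))    # pure noise features
    y = rng.integers(0, 2, size=n_trades)      # random labels: nothing real to learn
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=3)
    net = MLPClassifier(hidden_layer_sizes=(16,), max_iter=3000,
                        random_state=3).fit(X_tr, y_tr)
    # Any training score above ~0.5 is memorization; test stays near chance.
    print(f"{n_trades:>6} trades: train {net.score(X_tr, y_tr):.2f}, "
          f"test {net.score(X_te, y_te):.2f}")
```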

 
Yuriy Asaulenko:

When the training sample is much larger than the dimensionality of the NN, overfitting is practically impossible.

On small samples, overfitting happens in no time. Say you were given 200 real trades.

It depends on the data, the parameters and the type of model. In trees, for example, the number of levels grows dynamically, just as the number of neurons does in my network; there is a limit on the conditional information density, but it is determined only by the training sample, and you can also use pruning, committees, etc.

And overfitting does not necessarily mean memorizing all the samples; it means memorizing them without generalization, for example when there is contradictory information that gets replaced rather than averaged.
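
A minimal sketch of those complexity controls, with assumptions of my own (a synthetic rule plus 20% label noise, scikit-learn): compare an unrestricted tree, a cost-complexity-pruned tree, and a committee of bagged trees on train vs. test accuracy.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import BaggingClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(4)
X = rng.standard_normal((2_000, 20))
y = (X[:, 0] * X[:, 1] > 0).astype(int)          # a simple learnable rule
y[rng.random(2_000) < 0.2] ^= 1                  # plus 20% contradictory (noisy) labels
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=4)

models = {
    "full tree": DecisionTreeClassifier(random_state=4),
    "pruned tree": DecisionTreeClassifier(ccp_alpha=0.01, random_state=4),
    "committee of 50 trees": BaggingClassifier(DecisionTreeClassifier(),
                                               n_estimators=50, random_state=4),
}
for name, m in models.items():
    m.fit(X_tr, y_tr)
    print(f"{name:<22} train {m.score(X_tr, y_tr):.2f}  test {m.score(X_te, y_te):.2f}")
```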

 
Ivan Negreshniy:

It depends on the data, the parameters and the type of model. In trees, for example, the number of levels grows dynamically, just as the number of neurons does in my network; there is a limit on the conditional information density, but it is determined only by the training sample, and you can also use pruning, committees, etc.

And overfitting does not necessarily mean memorizing all the samples; it means memorizing them without generalization, for example when there is contradictory information that gets replaced rather than averaged.

Why not do the following as an experiment: download some super-duper strategy from the Market, run it in the tester (we trust the tester)), and feed the results to an NN, RF, SVM or something else. And there is no need to wait long: try it on a demo account and see the results.
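
A minimal sketch of that pipeline, where the CSV file name and its columns are hypothetical (a tester export with one row per trade and a "profit" column): load the trade list and compare an NN, an RF and an SVM at separating winning entries from losing ones.

```python
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

# Assumed layout: numeric feature columns describing the market at entry,
# plus "profit" with the trade result. Both the file and columns are hypothetical.
trades = pd.read_csv("tester_trades.csv")
X = trades.drop(columns=["profit"]).to_numpy()
y = (trades["profit"] > 0).astype(int).to_numpy()

for name, model in [("NN ", MLPClassifier(hidden_layer_sizes=(32,), max_iter=1000)),
                    ("RF ", RandomForestClassifier(n_estimators=200)),
                    ("SVM", SVC())]:
    scores = cross_val_score(model, X, y, cv=5)
    print(name, "mean CV accuracy:", scores.mean().round(3))
```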

 
Yuriy Asaulenko:

Why not do the following as an experiment: download some super-duper strategy from the Market, run it in the tester (we trust the tester)), and feed the results to an NN, RF, SVM or something else. And there is no need to wait long: try it on a demo account and see the results.

That won't work: a compiled version with protection the moderator will reject because the source code is required, and the source code will be rejected because the seller's rights have to be protected. The vicious circle at work :))

But there is nothing surprising here, because the legal status of robots, whatever their profession and environment, is still poorly defined...
