Discussion of article "Thomas DeMark's Sequential (TD SEQUENTIAL) using artificial intelligence" - page 4
So... please explain a couple of things. For example, I take the Java file (the "toad") - is that the trained model? There are three methods, there are no references anywhere, and the coefficients in the methods themselves are hardcoded; but they take 5 features, while there were 15. Which features did you use, or how did you reduce the dimensionality from 15 to 5?
And do I need the *.vrm binary for anything, if I just need to run a test?
Yes, the Java and MQL files are already the trained models; the training result is given at the very bottom, and also at the bottom are listed exactly those inputs that should be used, substituted into the variables v0...vN. Don't mix up the order, it's important: each input must go in its place. vmr has caught on :-))
That is, in addition to building the model, the optimiser also selects which inputs to use. I've attached a training file in which the inputs are labelled with letters, so you can match them up yourself.
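To make the ordering point concrete, here is a minimal sketch of feeding the selected columns into v0...vN. The column letters "W","E","R","T","Y","X", the FileIO.parseCsvFile(...) helper and the generated Model.getTernaryClassificator(...) method are taken from the Java snippet quoted later in this thread; the file path and the idea of printing each class are just illustrative assumptions.

    // Sketch only: the column order passed to parseCsvFile must match the v0..vN list
    // printed at the bottom of the generated model file - do not shuffle it.
    String trainPath = "vrmtest/train.csv"; // hypothetical path
    double[][] rows = FileIO.parseCsvFile(trainPath, ";", "W", "E", "R", "T", "Y", "X");
    for (double[] r : rows) {
        // v0 = r[0], v1 = r[1], ... v5 = r[5]
        int cls = Model.getTernaryClassificator(r[0], r[1], r[2], r[3], r[4], r[5]);
        System.out.println(cls);
    }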
I must have got something wrong, but so far it comes out slightly worse than random, by almost 2%.
Can you also provide the set you tested on, with the feature labels, for the sake of purity of the experiment? You didn't run the whole test set, did you?
The Java project is attached.
I admit I didn't quite understand the point of the post. Yes, I took the first 50 lines of the Traine file you sent. Now what do I need to do?
Honestly, I'm hardly going to run 65,000 lines, but 100 or 200 is quite acceptable, although even then the model will be building until morning. So what needs to be done?
* Sensitivity of generalisation ability: 82.92682926829268%
* Specificity of generalisation ability: 47.82608695652174%
* Generalisation ability: 70.3125%
* TruePositives: 34
* FalsePositives: 7
* TrueNegatives: 11
* FalseNegatives: 12
* Total patterns in out-of-sample with statistics: 64

Where are they????

    String testPath = "vrmtest/test.csv";
    // Selected inputs, in the order the model expects them.
    double[][] inputs = FileIO.parseCsvFile(testPath, ";", "W", "E", "R", "T", "Y", "X");
    // Target column.
    double[][] outputs = FileIO.parseCsvFile(testPath, ";", "V");
    double accuracy = 0;
    int all = 0;
    for (int i = 0; i < inputs.length; i++) {
        double[] x = inputs[i];
        double[] y = outputs[i];
        int predict = Model.getTernaryClassificator(x[0], x[1], x[2], x[3], x[4], x[5]);
        if (predict != 0) {                     // ignore "no signal" answers
            if (predict * y[0] > 0) accuracy++; // same sign as the target counts as a hit
            all++;
        }
    }
    accuracy = accuracy / all;
    System.out.println(accuracy);

I'm getting an accuracy of 48.2%.
I need the samples you checked, I want to know if I've got it wrong or not.
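For reference, the percentages above can be reproduced directly from the four confusion counts. Judging purely by the numbers, the optimiser's "sensitivity" and "specificity" appear to correspond to TP/(TP+FP) and TN/(TN+FN) rather than the textbook TP/(TP+FN) and TN/(TN+FP); that is an observation from the figures quoted above, not something documented anywhere here. A minimal check:

    public class MetricsCheck {
        public static void main(String[] args) {
            // Confusion counts copied from the optimiser output above.
            int tp = 34, fp = 7, tn = 11, fn = 12;

            double sensitivity    = 100.0 * tp / (tp + fp);                  // 82.93% - matches the reported "sensitivity"
            double specificity    = 100.0 * tn / (tn + fn);                  // 47.83% - matches the reported "specificity"
            double generalisation = 100.0 * (tp + tn) / (tp + fp + tn + fn); // 70.31% - matches "generalisation ability"

            System.out.printf("%.2f%% %.2f%% %.2f%%%n", sensitivity, specificity, generalisation);
        }
    }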
Mmmm. The thing is that the file I sent for training is split into two samples, one for training and the other for testing. It is split randomly every time; here you need to look at the optimiser itself. After splitting the sample, training and testing are performed on the selected sections, and for classification tasks the order in which the patterns arrive is not important. As far as I know, it does the split up to 100 times, as I remember Yuri saying. That is, I fed it a file with 50 records, it split them into two sections, trained, recorded the result, then split them again, and so on. You need to read its description; there is a section about splitting the sample and how it happens. You probably split it in half, with 50% at the beginning for training and the test at the end, but that's not right. The training sample is divided according to a different principle, not by time of arrival. For prediction tasks the order of arrival of the patterns IS important; for classification tasks it is not.
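If it helps to picture the procedure described above, here is a rough sketch of a repeated random split. The 50/50 ratio, the 100 repetitions and the trainAndScore(...) helper are illustrative assumptions, not the optimiser's actual code.

    import java.util.ArrayList;
    import java.util.Collections;
    import java.util.List;

    public class RandomSplitSketch {

        // Hypothetical stand-in for one optimiser pass: train on one part, score on the other.
        static double trainAndScore(List<double[]> train, List<double[]> test) {
            return 0.0; // placeholder - the real optimiser does this internally
        }

        public static void main(String[] args) {
            List<double[]> patterns = new ArrayList<>(); // all labelled patterns from the training file
            // ... load the patterns here ...

            for (int run = 0; run < 100; run++) {        // up to 100 random splits, as described above
                Collections.shuffle(patterns);           // order of arrival is irrelevant for classification
                int half = patterns.size() / 2;
                double score = trainAndScore(patterns.subList(0, half),
                                             patterns.subList(half, patterns.size()));
                System.out.println("run " + run + ": " + score);
            }
        }
    }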
So it goes like this..... Hm....
But it would be interesting to see the result of the model's work out of sample, further on as they say, and to see the statistics for that section - on data the network hasn't seen. That's what's interesting.
Dear colleagues, is it possible to build, for example, 1 pattern into the neural network and check it? It would be interesting to see how it all looks and what comes out of it in general. If it is possible, I will post a simple chart with the pattern figure clearly marked; of course it works on any market and pair. If yes, I will send you a screenshot, if it can be run on the main pairs, and also tell me whether you run the neural network on version 5 or 4.
Head-on -
On the toxic dataset -
train = 67.1%, test = 67%.
Oh! Now that's something! Even 1.3% more than mine))) But then again you beat me on numerai too, as I remember - respect to you, you know)))))
I hope you didn't train it on just 50 samples? Otherwise I won't be able to fall asleep))))
And they were warned several times))))
It's just an assumption, and the danger of using software always implies such things. Try training your AI, but with the above recommendations. There's nothing to guess: just take Sequenta, save the input data and train your AI on it; the result trained on an alternative AI will be much more interesting to see.
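As a rough illustration of the "save the input data" step, here is a minimal sketch of writing a training set in the same semicolon-separated CSV shape discussed above. The two example rows, their columns and the file name are invented placeholders just to show the format; this is not the article's actual export code.

    import java.io.FileWriter;
    import java.io.IOException;

    public class ExportTrainingSet {
        public static void main(String[] args) throws IOException {
            // Invented example rows: a few numeric inputs per signal,
            // with the class label in the last column.
            double[][] rows = {
                { 9, 13,  0.0012, -0.0004,  1 },
                { 9, 13, -0.0007,  0.0009, -1 },
            };

            try (FileWriter w = new FileWriter("train.csv")) {   // example path
                for (double[] r : rows) {
                    StringBuilder line = new StringBuilder();
                    for (int i = 0; i < r.length; i++) {
                        if (i > 0) line.append(';');             // same ";" separator used above
                        line.append(r[i]);
                    }
                    w.write(line.append(System.lineSeparator()).toString());
                }
            }
        }
    }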