Not Mashka's business! - page 10

 

Let's go!

Let's start by testing your machine on the Wiener process. And God forbid it starts predicting!!!

Below are two files, each containing integrated white noise. The vectors are 1000 samples long, obtained by cutting a 2000-sample vector in half. You should train your system on the first vector using any algorithm and give a forecast for every bar of the second vector. You can adapt your algorithm on every bar of the second vector. The resulting forecast vector should naturally be 1000 samples long and be available for analysis. We'll talk separately about how the forecasts will be analysed.
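The setup above can be sketched in a few lines. This is a hypothetical reconstruction: the seed, the output filenames and the use of NumPy are my assumptions, not Neutron's actual generator.

```python
import numpy as np

rng = np.random.default_rng(seed=42)   # seed is an assumption, for reproducibility
noise = rng.standard_normal(2000)      # white-noise increments
wiener = np.cumsum(noise)              # integrated white noise = a discrete Wiener path

# cut the 2000-sample vector in half: file 1 for training, file 2 for forecasting
train, test = wiener[:1000], wiener[1000:]
np.savetxt("rnd_1.txt", train)         # hypothetical filenames
np.savetxt("rnd_2.txt", test)
```

By construction the increments of such a series are independent, which is exactly why a good forecaster should find nothing to predict in it.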

Files:
rnd_1.zip  3 kb
 

Seryoga, I don't have an NN, I have Burg's method, which, by the way, works much better than your XXXXXX (that's a swear word) neutron network. I don't need your castrated files. You seem to be completely obsessed with these NNs.


Just give me ONE big file, at least 2000 samples long, and IDENTIFY (write down) the section to be tested, with the history I want. I need a history of about 1000 samples (minimum 500), and I don't need a training sample. For example, you post a file of 10,000 samples and, at random but allowing for the history I need, you say that the tested section starts at sample number 7000 and ends at sample 8000. By the indexes that is really 1001 samples; we'll just have to decide which of them we include and which we won't (I'm using ORIGIN=0).
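The indexing arithmetic above (0-based, inclusive bounds) is easy to check with a short sketch; the series here is just an illustrative Wiener path, not the file being requested.

```python
import numpy as np

rng = np.random.default_rng(0)
series = np.cumsum(rng.standard_normal(10_000))  # a 10,000-sample example path

start, end = 7000, 8000                # inclusive bounds, 0-based (ORIGIN=0)
window = series[start:end + 1]         # Python slices exclude the stop, hence the +1
history = series[start - 1000:start]   # the ~1000 samples of history before the window

print(len(window), len(history))       # 1001 1000
```

With inclusive bounds 7000..8000 the tested section really does contain 1001 samples, which is exactly the off-by-one the post flags.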


And then we'll get started :o)

For a start, let's test your machine on the Wiener process. And God forbid it starts predicting!!!

Your stubbornness amazes me. I don't even know how to explain to you that what you generate is a Wiener process (WP) about as much as I am a ballerina. Seryoga, I warn you right away: there is no error in my model, so get ready to enjoy good predictions. In short, your WP is not much of a criterion, although... we'll see.

:о)))

 

Wait a minute, wait a minute.

Is one file with a 1000-sample vector enough for you to work with? If so, take file #2 and don't mess with my head.

Do what you want with it, but don't tell me about it, and give me a forecast for EVERY sample. That's it.

If it turns out that to forecast the first sample you need a history of some length (well, think of something), then take it from file #1; that's exactly what it's there for!
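The protocol agreed in the last few posts - forecast every sample of file #2, adapting as each realised value arrives, pulling any needed pre-history from file #1 - can be sketched as a walk-forward loop. The naive last-value predictor below is only a placeholder for illustration, not Neutron's net or grasn's Burg model.

```python
import numpy as np

def walk_forward(history, test, predict):
    """One-step-ahead forecast for EVERY sample of `test`, re-adapting
    after each bar: the realised sample is appended to the history."""
    buf = list(history)                # pre-history taken from file #1
    forecasts = []
    for actual in test:
        forecasts.append(predict(buf))
        buf.append(actual)             # adapt on every bar
    return np.array(forecasts)

naive = lambda h: h[-1]                # placeholder: martingale forecast

rng = np.random.default_rng(1)
w = np.cumsum(rng.standard_normal(2000))
preds = walk_forward(w[:1000], w[1000:], naive)
print(len(preds))                      # 1000 forecasts, one per test sample
```

For a true Wiener process the last-value forecast is in fact optimal, which is the whole point of the test: any model claiming to beat it on this data is fooling itself.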


Sergey, that's enough talk, give birth already!

 

Terrific. Seryoga, of course I won't mess with your head; I'll just quote the posts with what I consider the important sentences highlighted, so as not to strain your neural net:


grasn wrote:

...

Just give me ONE large file, at least 2000 samples long, and IDENTIFY (write down) the section to be tested, with the history I want. I need a history of about 1000 samples (minimum 500), and I don't need a training sample. For example, you post a file of 10,000 samples and, at random but allowing for the history I need, you say that the tested section starts at sample number 7000 and ends at sample 8000. By the indexes that is really 1001 samples; we'll just have to decide which of them we include and which we don't (I use ORIGIN=0).

...

Neutron:

Wait, wait, wait.

Is one file with a 1000-sample vector enough for you to work with? If yes, take file #2 and don't mess with my head.

Do what you want with it, but don't tell me about it, and give me a forecast for EVERY sample. That's it.


Fine, you propose a file with 1000 samples, and for each sample in this file I have to make a prediction. But where do I get the data to predict the first sample? From the B...B? Or are you suggesting that I kindly sit there and glue these files together myself? And which one comes first? Which one exactly?


... ...and don't mess with my head.


How could I not!!! I won't mess with your head; I need this test first of all, and I will run it on EURUSD and post the results. And you can have fun with your Wiener process on your own, don't bullshit me: sit there and glue it together yourself. That's all. :о)

 

Well, that's the way it is.

 
Neutron:

Fine then, "that's it" it is.

you were the first to write "that's it" :o)))

The results will be ready in a couple of days. I'll have to split the calculation into two parts (two nights).

 
I look at you two Seryogas and rejoice ;)
 
komposter:
I look at you two Seryogas and rejoice ;)

A lisping guy comes to get a job at a factory:

- I need a nazhdak! (an emery wheel; with a lisp it sounds like "Nasdaq")

Foreman:

- Damn, brokers everywhere these days! ....

 

grasn

Third Seryoga, could you elaborate on the Burg method (what it is and what it's eaten with)? I know of Burg's method only by name. Unpack this method so that it isn't a black box. Thank you.

 
Prival:

grasn

Third Seryoga, could you elaborate on the Burg method (what it is and what it's eaten with)? I know of Burg's method only by name. Unpack this method so that it isn't a black box. Thank you.


That depends on where you start counting from :o)


Burg is correct (his papers are referenced). In theory you should be familiar with this method: it's connected with filter theory, and modifications of it are used in some clever adaptive filters. Attached is everything I've been able to find so far.



PS: I haven't looked into it in detail. I really hope Northwind will help me, but he has disappeared :o))) Or that you, Prival, will help me translate it into MQL, or at least recreate it in MathCAD. You don't have to, though; it's not my main model, just an option. Heh, one bar, three bars - you have to think big. But it may be interesting to understand the method in detail; again, useful ideas for adaptive filtering may come up, and so on... :о)
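For reference, the core of Burg's method - reflection coefficients chosen to minimise the combined forward and backward prediction-error power, with a Levinson-style coefficient update - can be sketched as follows. This is a textbook reconstruction under my own sign convention, not the code in the attachment.

```python
import numpy as np

def burg(x, order):
    """Estimate AR coefficients a[1..p] of x[n] = -sum_k a[k]*x[n-k] + e[n]
    by Burg's method (minimum forward+backward prediction-error power)."""
    f = np.asarray(x, dtype=float).copy()  # forward prediction errors
    b = f.copy()                           # backward prediction errors
    a = np.zeros(0)
    for _ in range(order):
        fp, bp = f[1:], b[:-1]
        # reflection coefficient minimising the summed error power
        k = -2.0 * np.dot(fp, bp) / (np.dot(fp, fp) + np.dot(bp, bp))
        f, b = fp + k * bp, bp + k * fp    # update the error sequences
        a = np.concatenate([a + k * a[::-1], [k]])  # Levinson recursion
    return a

def predict_next(x, a):
    """One-step-ahead forecast from the fitted AR model."""
    p = len(a)
    return -np.dot(a, np.asarray(x)[-1:-p - 1:-1])  # x[n-1], ..., x[n-p]

# sanity check on a known AR(1) process x[n] = 0.9*x[n-1] + noise
rng = np.random.default_rng(2)
x = np.zeros(5000)
for n in range(1, 5000):
    x[n] = 0.9 * x[n - 1] + rng.standard_normal()
print(np.round(burg(x, 1), 2))             # close to [-0.9]
```

On a genuine Wiener process the same fit should recover roughly a pure random walk, i.e. a forecast indistinguishable from "next = last", which ties back to the test proposed at the top of the page.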



IMPORTANT:

Be warned, this function can be very wrong, i.e. you won't get the previously published "error cloud" within 1-2 points; it all depends on how the input parameters are optimised. There was an idea I couldn't get around to before; now I've implemented it...

Files:
c13s6.zip  71 kb