Neuro-forecasting of financial series (based on one article) - page 4

 
alexeymosc:


Do you think that on a trend two moving averages will give the same result? Have you tried predicting the direction of price movement a day ahead with an MA? The result will be 50/50 (at best 40/60) on a sample of at least 150 days, whether the market is trending or flat. I can understand holding a trade from crossover to crossover in a trend, but the article is talking about something else.

PS: If I'm wrong, throw a stone at me. This isn't my first rodeo either.


I was writing about the same piece of data on which the research was done in the paper.
 
alexeymosc:


I haven't either, although as a user I have worked closely with neural networks for a number of years.

It's not about that. Simply saying "this is a trend" is not correct in this context. And predicting the candle colour one step ahead with nearly 100% probability is, in my opinion, almost fantastic. But where is the constructive criticism of the article? Or is there not enough data to criticize? Or maybe the author of the article simply fooled everyone, even his respected academic supervisor.

You should at least read the article, by the way. Besides the fantastic results, it uses some interesting techniques, many of which I have not tried myself.


What constructive criticism is needed if it is a diploma-level paper, i.e. not scientific research as in a PhD thesis, but a demonstration of the student's ability to apply neural networks and fuzzy logic?

There is no need to fool anyone: the thesis supervisors themselves go along with it. Everything should look nice, but everyone knows this is not the point of a thesis. The lecturers on the committee just sit and smile silently while listening to the part about the economic (and other) efficiency of the project.

 
nikelodeon:

I don't care what anyone out there predicts, I confess I haven't even read the article.

Well, go read it first.

The text clearly shows that the author is no amateur and can perfectly well tell training, cross-validation and test samples apart.

 
Integer:


What constructive criticism is needed if it is a diploma-level paper, i.e. not scientific research as in a PhD thesis, but a demonstration of the student's ability to apply neural networks and fuzzy logic?

There is no need to fool anyone: the thesis supervisors themselves go along with it. Everything should look nice, but everyone knows this is not the point of a thesis. The lecturers on the committee just sit and smile silently while listening to the part about the economic (and other) efficiency of the project.


I see. Maybe. So one explanation is curve fitting: the test segment fully or partially overlaps the training segment. That seems too brazen a lie for a basically savvy student (which you can tell from the article), and besides, a copy of the thesis has been accepted for publication on an academic portal. What neural network enthusiast would fail to spot such a lie? IMHO.
 
Integer:

I don't understand your position. Do you think this work shows reliable results? Do you think this is possible?

Well, if you consider training the network to a "same as yesterday" result credible... I personally do not...
 

I have not read the article, and I am no longer interested in NS, but at one time, I confess, I did study them. The training produced a long formula, and I used it to make a prediction on a piece of data that I had not fed into training (clever me). I wrote the formula out in Excel, dragged it down to the end of the data, and got a "prediction". Then, in a separate column, the difference from the fact and... I totally freaked out: a 98% match!!! The difference was no more than a few points!!! I spent a couple of days poring over the results with greedy eyes... I trained it this way and that, gave it more data and less... My brain knew it couldn't be, by definition, but why wouldn't the numbers drop below 90%?!

When I cooled down, I understood it, of course: the formula was cleverly computing no more than a few points of "addition" on top of the previous value. And on top of that (ass that I am), for "more accurate modelling" I had substituted the actual value in place of the forecasted one: as in real life, I would get a real quote and forecast again from it :)

....

For those who haven't guessed: the next forecast again added a fraction of a point to the previous actual value, and whichever way the next fact went, it "corrected" the forecast, and the formula again added its 1-2 points... As a result, the forecast fluctuated around the fact, never crawling away from it, and gave a crazy 98%.

When I stopped substituting the fact and started feeding in the previous "predicted" price, I got a slightly wobbly but practically straight line which, as you understand, led nowhere :))))
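The trap described above is easy to reproduce. The sketch below uses a made-up random walk (all numbers are invented for illustration): a "forecast" of previous actual price plus a small learned offset looks nearly perfect step by step, but once each prediction is fed back in as the next input, the forecast drifts away into a straight line.

```python
import random

random.seed(1)

# Toy random walk standing in for a quote series (illustration only).
prices = [1.2000]
for _ in range(500):
    prices.append(prices[-1] + random.gauss(0, 0.0001))

# Step 1: forecast each next price from the previous ACTUAL price plus a tiny
# fixed "addition". Errors stay within a few points: the 98%-match illusion.
offset = 0.0001
errors = [abs((prices[t] + offset) - prices[t + 1]) for t in range(len(prices) - 1)]
print(max(errors))          # stays within a few points of zero

# Step 2: forecast recursively, feeding each PREDICTION back in instead of the
# fact. With no real signal, the path degenerates into a straight drifting line.
recursive = [prices[0]]
for _ in range(len(prices) - 1):
    recursive.append(recursive[-1] + offset)
print(abs(recursive[-1] - prices[-1]))   # the "forecast" crawls far from reality
```

Step 1 is what made the Excel column look like a 98% match; step 2 is the wobbly-but-straight line that goes nowhere.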


Just in case, to everyone who wants to convince me that I "just don't know how to cook them" and that an NS can make predictions: don't waste your time (I am no longer interested in the question). I am writing for those who, like me, were once fascinated by 100% accurate predictions. Remember the phrase: "I can feel I'm being screwed, but I can't see where." Look not for confirmation of the fiction but for its refutation. If you don't find it, it will be pointed out to you rather harshly, either by your net draining the deposit or by the heap of time wasted on "training" the NS.

 
nikelodeon:

Well, if you consider training the network to a "same as yesterday" result credible... I personally do not...


I don't get it. If I do... then what?

What does "same as yesterday" mean? Is it some special term among some special neural network adepts? I am not one of those adepts and I don't understand the meaning of the phrase. Sorry. In short, it is not clear what you want to say. But it is clear that you consider the results unreliable. And you know, I also consider them unreliable, and I have already written about it in this thread more than once.

 
f.t.:

I have not read the article, and I am no longer interested in NS, but at one time, I confess, I did study them. The training produced a long formula, and I used it to make a prediction on a piece of data that I had not fed into training (clever me). I wrote the formula out in Excel, dragged it down to the end of the data, and got a "prediction". Then, in a separate column, the difference from the fact and... I totally freaked out: a 98% match!!! The difference was no more than a few points!!! I spent a couple of days poring over the results with greedy eyes... I trained it this way and that, gave it more data and less... My brain knew it couldn't be, by definition, but why wouldn't the numbers drop below 90%?!

When I cooled down, I understood it, of course: the formula was cleverly computing no more than a few points of "addition" on top of the previous value. And on top of that (ass that I am), for "more accurate modelling" I had substituted the actual value in place of the forecasted one: as in real life, I would get a real quote and forecast again from it :)

....

For those who haven't guessed: the next forecast again added a fraction of a point to the previous actual value, and whichever way the next fact went, it "corrected" the forecast, and the formula again added its 1-2 points... As a result, the forecast fluctuated around the fact, never crawling away from it, and gave a crazy 98%.

When I stopped substituting the fact and started feeding in the previous "predicted" price, I got a slightly wobbly but practically straight line which, as you understand, led nowhere :))))


Just in case, to everyone who wants to convince me that I "just don't know how to cook them" and that an NS can make predictions: don't waste your time (I am no longer interested in the question). I am writing for those who, like me, were once fascinated by 100% accurate predictions. Remember the phrase: "I can feel I'm being screwed, but I can't see where." Look not for confirmation of the fiction but for its refutation. If you don't find it, it will be pointed out to you rather harshly, either by your net draining the deposit or by the heap of time wasted on "training" the NS.


What you have described is called a "shift". I don't know who invented the name, but the essence is correct. If you apply an NS to raw quotes and try to approximate the function one step ahead (i.e. predict the future price value), you will get the last known price value + 1-2 points, and a 50/50 hit rate on price direction. We've probably all been through that, but things get more interesting further on. )
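A quick sketch of why the "shift" gives a 50/50 direction hit. On a synthetic random walk (all numbers invented), a forecast of "last price plus a point or two" always predicts "up", so its direction accuracy is just the fraction of up-moves, which hovers around one half:

```python
import random

random.seed(7)

# Toy random walk in place of raw quotes (illustration only).
prices = [1.3000]
for _ in range(2000):
    prices.append(prices[-1] + random.gauss(0, 0.0001))

# The "shift": the forecast is the last known price plus a small positive
# offset, so the predicted direction is always "up" and carries no information.
hits = 0
for t in range(len(prices) - 1):
    predicted_up = True                      # forecast = prices[t] + offset > prices[t]
    actually_up = prices[t + 1] > prices[t]
    hits += (predicted_up == actually_up)

print(hits / (len(prices) - 1))              # direction hit rate near 0.5
```

Despite the tiny point-by-point price error, the directional edge is zero, which is why the 98% "accuracy" earns nothing.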
 
Integer:


I don't get it. If I do... then what?

What does "same as yesterday" mean? Is it some special term among some special neural network adepts? I am not one of those adepts and I don't understand the meaning of the phrase. Sorry. In short, it is not clear what you want to say. But it is clear that you consider the results unreliable. And you know, I also consider them unreliable, and I have already written about it in this thread more than once.


Actually, it's overtraining. I'm surprised you don't know that. The conventional wisdom is that a network is overtrained when it starts working "the same as yesterday": instead of highlighting key points in the input, it starts producing the same signal as yesterday...
 
alexeymosc:

...and then it gets more interesting.

What can be interesting about it (apart from the task of training your brain)?

No NS can work without retraining (in the sense of learning from new data). The market changes and the net has to learn it. The question is: when to start a new training? ;)

And then, what can you "fix" in the net when it "breaks"? Change the layers and the number of neurons, try another transfer function... But you will never know exactly what to change, how, or where. Until you adjust the net to the new market, it won't work. It's like writing if ( Price == Ask ) and then finding that Ask = 1.2345 while Price turns out to be 1.23449999999 for some reason.
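The Price == Ask trap mentioned above is ordinary floating-point behaviour, not something specific to trading platforms: two values that print identically can differ by a hair, so direct equality fails. A minimal sketch (the names and the half-point tolerance are illustrative assumptions, not anyone's actual code):

```python
# Two "equal" quotes that differ by an invisible hair.
ask = 1.2345
price = 1.2345 - 1e-11          # what the terminal may actually hold

print(price == ask)             # False, despite both printing as 1.2345

# The usual fix: compare within a tolerance, e.g. half a point on 4-digit quotes.
POINT = 0.0001

def prices_equal(a, b, tol=POINT / 2):
    return abs(a - b) < tol

print(prices_equal(price, ask))  # True
```

The same principle applies inside a net: exact matches on real-valued inputs are meaningless, which is part of why "fixing" a broken net by poking at it blindly rarely works.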

Now imagine a conversation with a potential investor who asks you: "What are you going to do when it stops earning?" Guess which answer he will like better:

1) I will retrain the NS and, once it has learned, return it to making profit (if the market has not changed yet again by then).

2) I will add debug printing, find the error and fix it.

So if it's "out of interest", you're welcome to it; but what if it's for making money? ;)
