Neural network folks, don't pass by :) advice needed - page 2

 

/deleted by the moderator/

TheXpert:
An Echo State Network?

http://www.scholarpedia.org/article/Echo_state_network

Is that it? :)

 
It is :). Maybe I'll release it to the public in a little while.
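For reference, a minimal Echo State Network sketch in NumPy, in the spirit of the Scholarpedia article linked above (the reservoir size, spectral radius, washout length and ridge constant below are illustrative assumptions, not the thread starter's settings): a fixed random recurrent reservoir is driven by the input, and only the linear readout is trained, here by ridge regression.

```python
import numpy as np

rng = np.random.default_rng(0)

# Reservoir dimensions (illustrative only)
n_in, n_res = 1, 200

# Fixed random input and reservoir weights; only the readout is trained.
W_in = rng.uniform(-0.5, 0.5, (n_res, n_in))
W = rng.uniform(-0.5, 0.5, (n_res, n_res))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))  # keep spectral radius below 1

def run_reservoir(u):
    """Collect reservoir states for an input sequence u of shape (T, n_in)."""
    x = np.zeros(n_res)
    states = np.empty((len(u), n_res))
    for t, u_t in enumerate(u):
        x = np.tanh(W_in @ u_t + W @ x)  # leak-free state update
        states[t] = x
    return states

def train_readout(states, targets, ridge=1e-6):
    """Ridge-regression readout mapping reservoir states to targets."""
    X, Y = states, targets
    return np.linalg.solve(X.T @ X + ridge * np.eye(n_res), X.T @ Y).T

# Toy usage: one-step-ahead prediction of a sine wave.
u = np.sin(np.linspace(0, 20 * np.pi, 2000))[:, None]
S = run_reservoir(u[:-1])
W_out = train_readout(S[100:], u[1:][100:])  # drop a 100-step washout
pred = S @ W_out.T                           # predictions for u[1:]
```

The key design point is that training reduces to a single linear solve, which is why an ESN can be re-fitted quickly in a rolling-optimization setup.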
 

The euro (EURUSD).

Just for fun - a test run over a year's time.


 

The question is not meant to provoke, just something to ponder:

Suppose you got a curve similar to the thread starter's, but using only MAs: 25 months of optimization, 1 month OOS, with all the forward segments stitched together in the same way (a rough sketch of that stitching follows the answer options). Would that be curve fitting?

Answer options:

  1. It would be curve fitting. Then why is it not curve fitting when the MAs are replaced with a NN?
  2. It would not be curve fitting, or it is unknown. Then the same can be said for the NN variant.
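A rough sketch of that stitched walk-forward procedure (Python/NumPy; `optimize` and `run_oos` are hypothetical stand-ins for fitting the MA or NN parameters and computing per-bar results, and 21 daily bars per month is an assumption):

```python
import numpy as np

def stitched_forward_curve(prices, optimize, run_oos,
                           opt_len=25 * 21, oos_len=21):
    """Rolling walk-forward test: optimize on `opt_len` bars, trade the
    next `oos_len` bars with the frozen parameters, slide the window by
    `oos_len`, and stitch all the OOS pieces into one curve."""
    pieces, start = [], 0
    while start + opt_len + oos_len <= len(prices):
        in_sample = prices[start:start + opt_len]                 # 25 "months"
        oos = prices[start + opt_len:start + opt_len + oos_len]   # 1 "month"
        params = optimize(in_sample)          # fit parameters in-sample only
        pieces.append(run_oos(oos, params))   # per-bar result on unseen data
        start += oos_len
    return np.concatenate(pieces)             # the combined forward curve
```

Whether such a stitched curve counts as curve fitting is exactly the question posed above; the procedure itself only guarantees that every traded bar was out of sample for the parameters used on it.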
 

hrenfx:

Would that be curve fitting?

What curve fitting? Who is fitting what to what? Stitched-together forwards are just stitched-together forwards.

I am not interested in opinions about the charts at all, nor about me personally, by the way :). I asked a specific question to specific people.

If you have something to say on the subject, say it; if not, pass it by.

 
TheXpert:
What curve fitting? Who is fitting what to what? Stitched-together forwards are just stitched-together forwards.

I will not voice my opinion. You either see it or you don't.
 
hrenfx:
You either see it or you don't.
Well put :). The same could be said right back.
 

If such beautiful curves are coming out, test them in real time - I am also curious to see the result.

That is, first train the NN, then launch the EA and do not touch it for a month.

 
hrenfx:

Suppose you got a curve similar to the thread starter's, but using only MAs: 25 months of optimization, 1 month OOS, with all the forward segments stitched together in the same way. Would that be curve fitting?

Answer options:

  1. It would be curve fitting. Then why is it not curve fitting when the MAs are replaced with a NN?
  2. It would not be curve fitting, or it is unknown. Then the same can be said for the NN variant.

Would you get a similar curve? It is possible, of course...

The term "fitting" is not really correct - even Wikipedia does not know it (though it does know overfitting :)). It would be more accurate to speak of training and overtraining (overfitting, overlearning). And here is the point:

- Any training is a fit, no matter how you slice it. But whether the set of parameters obtained by that training is a "fit" in the bad sense of the word (overfitting, which here we colloquially call simply a fit) can be checked precisely by means of OOS. If there is an OOS result, and it is reasonably stable over 8 years (even if not very profitable), then it is not overfitting. So I choose answer option 2. And the system can be used one way or another - what else do we need?

Now, can a similar result be shown with the MAs? I do not think it will work: the mathematical apparatus (the ability to "infer" and "generalize") of such a TS is weak... And even if it did work once, it is not guaranteed to work again; I do not think there is any stability to speak of.

 

A very big request to the moderators to clean up the thread.

To TheXpert: As far as improving the curve is concerned, would it make sense to think about a different way of preparing the data, i.e. what is fed in as input? If all we are doing now is essentially smoothing the price, maybe it makes sense to predict not the price itself but its logarithmic increments (and convert them back to price afterwards) - roughly as sketched below.
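A minimal sketch of that transformation (plain NumPy, the network itself omitted): feed the model logarithmic increments rather than raw prices, and map any prediction back to the price scale by exponentiating the cumulative sum.

```python
import numpy as np

def to_log_returns(prices):
    """Price series -> logarithmic increments r_t = ln(p_t / p_{t-1})."""
    prices = np.asarray(prices, dtype=float)
    return np.diff(np.log(prices))

def to_prices(log_returns, p0):
    """Rebuild a price path from (predicted) log increments, starting at p0."""
    return p0 * np.exp(np.cumsum(log_returns))

# Example: the NN works on r, and its output is converted back for comparison.
prices = np.array([1.3050, 1.3071, 1.3063, 1.3102])
r = to_log_returns(prices)           # inputs/targets for the network
restored = to_prices(r, prices[0])   # equals prices[1:] up to rounding
```

The increments are roughly stationary compared with the raw price, which is usually the motivation for this kind of preprocessing.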
