NS + indicators. Experiment. - page 9

 

It's complicated... :( I'll look into it.

 

About the Deductor: when you're ready and need to extract the coefficients of a grid trained in it, it can be done in a perfectly legal way, even in the academic version.

 
Rosh:
Alex-Bugalter wrote:
Most respected Rosh & SK, since you know so well what's good and what's bad, and which way is best to go,
maybe you could point out to the uninitiated what, in your opinion, the harm is, and what exactly is untrue in this article?
So many people have been misled, so show them the right way.
Or are you just passing by?
Anyone can cast aspersions indiscriminately.
And in this article, "Neural networks and time series analysis", is what's written also rubbish?

P.S.: And Rosh, for me personally, if it's not too much trouble: what exactly did you mean by "Written didactically awful"?



The article clearly shows not a desire to convey information, but a desire to show off. Clarity of presentation therefore comes in third place (all kinds of phrases like "Gazprom below 5.6" [as if only a dumb person would ask where that is visible] - I won't go back to the article now to point out exact spots - fuzzy pictures, and so on). Further, the authors marvel at how accurately the network predicts the boundaries of the maximum and minimum prices of future bars, and say that only a complete fool would fail to make money on it - that is pure profanation. Let them make money on it themselves, if it is so simple. Lots of clever words; you can write something like that after reading a couple of books. I repeat: the purpose of this article is to show how clever the authors are and how well they know the incantations for working with these programs, and to put down the reader, who will understand nothing from it except how cool everything supposedly is.


Alex-Bugalter

The article is hilarious... so beautifully predicted... having such an NS... you can buy up a small country :-)

well, the king of a small country wouldn't go on the internet...

 
klot:
TedBeer:

www.basegroup.ru is also easily searchable by google

My favourite site :) That's where I got most of my algorithms.

+1
 
Every cloud has a silver lining.
The bad news: the site neuroforex.jino-net.ru went down, along with another hundred and fifty sites located on that hosting.
The good news: the site is moving to another host and will soon (after the New Year) be available at neuroforex.net

Now, concerning these articles: unfortunately, their authors are not present on this forum and cannot defend themselves or answer all the attacks on their work.
That's the first point.

Second: I believe a forum devoted to trading automation is not a literary circle, so the ideas expressed in the articles deserve to be analysed from the standpoint of working with neural networks rather than combed for spelling errors. So far I haven't heard any alternative to what was stated in the article - just blah-blah. What a pity. People only talk about the difficulties and the impossibility of predicting the closing price of the next bar.
Only klot, as always, pleased us with a bit of code and sober thinking.
I think the presentation and preparation of input data is the most important problem in working with networks.
To be honest, hand on heart, reading these articles I didn't notice any pompousness on the authors' part, or any desire to show how clever they are, and so on.
I am interested in any vision and approach to work, any little detail, any carelessly thrown word, which could lead me to a correct understanding of working with NS. Therefore I am inclined to believe that everyone has found in these articles what he or she is looking for.

YuraZ:
Alex-Bugalter

The article is hilarious... all so beautifully predicted... having
such an NS... you can buy up a small country :-)

Well, the king of a small country wouldn't go on the Internet ...

The King would not, and neither should we.
 
njel:
Prival:
klot:

To Prival: thanks for your help!!! But I won't be recognising tanks, thank God - the task is much simpler...


I think it was easier with tanks :-). Those have already been dealt with, but forex isn't working out. It would be very interesting to find out how the NS would behave if we fed it the AMA derivatives as inputs.
There may be several of them. And what should the output be - also an AMA derivative? Or several?

I don't know whether these packages allow it, but I would try the following: the inputs are the AMA derivative, and the teacher is a ZigZag. I.e., try to teach the NS to recognise pivot points.
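To make this concrete, here is a minimal sketch of how such a training set could be assembled - in Python rather than MQL4, with a simple Kaufman AMA and a toy threshold ZigZag standing in for the real indicators. All function names and parameters here are illustrative, not taken from any of the packages discussed:

```python
import numpy as np

def kama(prices, er_period=10, fast=2, slow=30):
    """Kaufman Adaptive Moving Average - one common reading of 'AMA'."""
    prices = np.asarray(prices, dtype=float)
    out = np.empty_like(prices)
    out[0] = prices[0]
    fast_sc, slow_sc = 2.0 / (fast + 1), 2.0 / (slow + 1)
    for i in range(1, len(prices)):
        lo = max(0, i - er_period)
        change = abs(prices[i] - prices[lo])
        vol = np.sum(np.abs(np.diff(prices[lo:i + 1]))) or 1e-12
        er = change / vol                               # efficiency ratio in [0, 1]
        sc = (er * (fast_sc - slow_sc) + slow_sc) ** 2  # adaptive smoothing constant
        out[i] = out[i - 1] + sc * (prices[i] - out[i - 1])
    return out

def zigzag_labels(prices, threshold=0.02):
    """Toy ZigZag teacher: +1 at confirmed swing highs, -1 at swing lows, 0 elsewhere."""
    prices = np.asarray(prices, dtype=float)
    labels = np.zeros(len(prices), dtype=int)
    direction, ext_idx, ext_val = 1, 0, prices[0]  # start by tracking a rising leg
    for i in range(1, len(prices)):
        p = prices[i]
        if direction == 1:
            if p > ext_val:
                ext_idx, ext_val = i, p            # new candidate high
            elif p < ext_val * (1 - threshold):
                labels[ext_idx] = 1                # high confirmed by retracement
                direction, ext_idx, ext_val = -1, i, p
        else:
            if p < ext_val:
                ext_idx, ext_val = i, p            # new candidate low
            elif p > ext_val * (1 + threshold):
                labels[ext_idx] = -1               # low confirmed
                direction, ext_idx, ext_val = 1, i, p
    return labels

def build_training_set(prices):
    """Inputs: first difference ('derivative') of the AMA; teacher: ZigZag labels."""
    x = np.diff(kama(prices))
    y = zigzag_labels(prices)[1:]   # align targets with the differenced inputs
    return x, y
```

Feeding `x` to a network with `y` as the teacher turns the idea into an ordinary supervised classification task; whether the net then recognises pivots on real quotes is, of course, exactly the open question of this thread.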
 
TedBeer:

About the Deductor: when you're ready and need to extract the coefficients of the grid trained in it, it can be done in a perfectly legal way, even in the academic version. Feel free to ask.

I was using Deductor some time ago and I got good results, especially with Kohonen maps, if you use them as a predictive block. How do you get the coefficients?
 
Piligrimm:
Some time ago I used Deductor, the results were not bad, especially with Kohonen maps, if you use them as a predictive unit. How do you get the coefficients?

There are at least 2 ways :-) But the easiest one is to look in your .ded file. It's just an XML file, and you can find all the coefficients you need there. I just couldn't reproduce the grid in MT because I didn't normalise the inputs. But I think you can look up the normalisation used in the Delphi components on their website. It shouldn't be much different. I'm not going to do that, as I'm writing my own implementation of Kohonen maps.
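As an illustration of the "look in the XML" route: a generic traversal can pull out anything that looks like a weight without knowing the full schema. Note that the tag and attribute names below (`weight`, `coef`, and the sample snippet) are hypothetical - I don't have Deductor's actual .ded schema to hand, so inspect your own file first and adjust the keywords:

```python
import xml.etree.ElementTree as ET

def extract_weights(xml_text, keywords=("weight", "coef")):
    """Collect float values from any element text or attribute whose
    name contains one of the given keywords (case-insensitive)."""
    root = ET.fromstring(xml_text)
    found = []
    for elem in root.iter():
        candidates = list(elem.attrib.items())
        if elem.text and elem.text.strip():
            candidates.append((elem.tag, elem.text.strip()))
        for name, value in candidates:
            if any(k in name.lower() for k in keywords):
                try:
                    found.append(float(value))
                except ValueError:
                    pass  # matching name but non-numeric payload - skip it
    return found

# Hypothetical structure, for demonstration only:
sample = '<net><neuron weight="0.5" bias="0.1"/><coef>1.25</coef></net>'
print(extract_weights(sample))  # -> [0.5, 1.25]
```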
 
Piligrimm:
I was using Deductor some time ago and I got good results, especially with Kohonen maps, if you use them as a predictive block. How do you get the coefficients?


I managed to transfer the Kohonen network from Deductor to MT!!! In my particular case - with range expansion and linear normalisation enabled. I don't yet know whether that is enabled by default.

The inputs are scaled to a range of [0...1]. So for inputs which have range [-10...10] normalization will look like this: in[j] = (in[j] + 10) / 20
For inputs with range [0...100] - in[j] = in[j] / 100
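The same scaling as a tiny helper - a sketch of the plain min-max normalisation described above, plus its inverse for mapping network outputs back:

```python
def normalize(x, lo, hi):
    """Linear scaling of x from [lo, hi] to [0, 1]: in[j] = (in[j] - lo) / (hi - lo)."""
    return (x - lo) / (hi - lo)

def denormalize(y, lo, hi):
    """Inverse mapping, e.g. to turn a network output back into a price."""
    return lo + y * (hi - lo)

print(normalize(0, -10, 10))   # -> 0.5, matches (0 + 10) / 20
print(normalize(50, 0, 100))   # -> 0.5, matches 50 / 100
```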

 
TedBeer:
Piligrimm:
Some time ago I used Deductor, the results were not bad, especially with Kohonen maps, if you use them as a predictive unit. How do you get the coefficients?

There are at least 2 ways :-) But the easiest way is to look in your .ded file. It's just an XML file, and you can find all the coefficients you need there. I just couldn't replicate the grid in MT because I didn't normalise the inputs. But I think you can look up the normalisation used in the Delphi components on their website. It shouldn't be much different. I'm not going to do that, as I'm writing my own implementation of Kohonen maps.
What's the second way?