MetaTrader 5 Python User Group - how to use Python in Metatrader - page 83

Prepared correctly or incorrectly....
Where can I read about this? I prepare the data for the NN according to my own sense of what is important and what is not.
One thing that puzzles me: should data of the same type be collected "in one pile", or appended as it arrives?
And from which side should the data be collected: starting from the "older" bars or from the "newer" ones?
Ask in the machine learning thread, someone will answer there. This topic is about the connector.
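For the connector side of the data-collection question, here is a minimal sketch using the official MetaTrader5 Python package; the symbol, timeframe and bar count are placeholders, and the ordering of the returned bars is sorted explicitly rather than assumed:

```python
# Minimal sketch: pull historical bars via the MetaTrader5 package
# and turn them into a pandas DataFrame for later NN preparation.
# Symbol, timeframe and bar count are illustrative placeholders.
import MetaTrader5 as mt5
import pandas as pd

if not mt5.initialize():
    raise RuntimeError(f"initialize() failed, error code: {mt5.last_error()}")

# start_pos=0 is the most recent bar; request 1000 bars back from it
rates = mt5.copy_rates_from_pos("EURUSD", mt5.TIMEFRAME_H1, 0, 1000)
mt5.shutdown()

df = pd.DataFrame(rates)
df["time"] = pd.to_datetime(df["time"], unit="s")

# Sort explicitly so the "older -> newer" order never depends on the API
df = df.sort_values("time").reset_index(drop=True)
print(df[["time", "open", "high", "low", "close"]].tail())
```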
The trouble is that normalisation is a lost cause altogether!
Let me explain. There are some data A, B, C...
They differ in significance and so on. Everyone (Google) says that normalization should be done by columns (A-A-A, B-B-B, C-C-C), not by rows. That is logically understandable.
But when new data arrives for "prediction", HOW do you normalize it if it is only ONE row? And any term in that row can fall outside the normalisation range of the training and test data.
And normalization by rows has no effect!
Actually, after checking these nuances, I had this "cry of the soul" ))))
During normalization the coefficients are saved. To avoid going out of range, take a big chunk of history and normalize it, then apply those same coefficients to the new data.
On non-normalised data a network will not learn, or will learn poorly. That is simply their nature.
All this is logical and understandable, but the network IS being trained! Besides, there is information that learning on non-normalized data is harder, though not critically so.
And how do you avoid going out of range? For example, take price. On the training and test data the price range is, say, 123-324. But then the price goes up to 421. How does that fall into the same range?
But we're getting away from the heart of the matter: why, when training and testing go normally, is the prediction worth nothing at all?
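To illustrate the "save the coefficients and reuse them" advice above, here is a hedged sketch with scikit-learn's StandardScaler, fitted once on a chunk of history and then applied to new rows. A value outside the training range (like the 421 price in the example) simply maps to a larger z-score; it does not break anything. All numbers below are made up:

```python
# Sketch: fit the scaler once on training history, reuse it on new data.
# All numbers are made up for illustration.
import numpy as np
from sklearn.preprocessing import StandardScaler

# "Big chunk of history": each row is one sample, each column one feature
history = np.array([[123.0, 1.0],
                    [200.0, 2.0],
                    [324.0, 3.0]])

scaler = StandardScaler()
scaler.fit(history)              # per-column means / stds are stored inside the scaler

train_scaled = scaler.transform(history)

# New observation outside the training range (price 421 > 324):
new_row = np.array([[421.0, 2.5]])
new_scaled = scaler.transform(new_row)   # no re-fitting, same coefficients

print(new_scaled)   # the first feature simply gets a z-score larger than any seen in training
```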
Dear friends, once again my skis won't move... I'm asking for help.
I decided to sketch out a little tester to check the trained network's predictions.
Everything is fine here.
And then, the next thing you know...
it starts swearing at me with errors.
What's wrong?
Having searched the web and looked at the article on the basis of which I wrote my code, I came to the disappointing conclusion that every author of every article "for beginners" is bound to forget to mention something important...
And here it turned out that StandardScaler is used when training the network. But the article does not say a word about what it is or why it is needed.
Moreover, StandardScaler is standardization, and I understand even less how I can carry out the same standardization for a single input vector.
Even worse, the "standardization" is done by columns of the dataset! No, for pure statistics that is fine. But for forecasts it is "***hole"! When new data arrives, do I have to retrain the network just so the new data falls into the "standardisation" range?
Bullshit!
By the time this "new network" is trained, the situation may already have changed drastically. So what the f*ck is the point of it?
So much for Python with its bunch of "sharpened" libraries....
I'd be very grateful if you could change my mind.
P.S. I just want to believe that I didn't waste my time on Python for nothing.
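On the "single input vector" complaint: the scaler is fitted on the training columns once, and afterwards only transform() is called on each new row, so neither the scaler nor the network has to be retrained when new data arrives. A minimal sketch, assuming scikit-learn with joblib for persistence (file names and feature values are placeholders):

```python
# Sketch: persist a fitted StandardScaler and apply it to one new input vector.
# File name and feature values are placeholders.
import numpy as np
import joblib
from sklearn.preprocessing import StandardScaler

# --- at training time ---
X_train = np.random.rand(500, 4)          # stand-in for the real training features
scaler = StandardScaler().fit(X_train)    # learns per-column mean and std
joblib.dump(scaler, "scaler.pkl")         # save it next to the trained model

# --- later, at prediction time ---
scaler = joblib.load("scaler.pkl")
x_new = np.array([0.1, 0.7, 0.3, 0.9])               # one incoming feature vector
x_new_scaled = scaler.transform(x_new.reshape(1, -1)) # shape (1, n_features)
# x_new_scaled can now be fed to the already-trained network
```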
(I can barely make sense of it. )))
But now I have another question (the one I started all this for):
When I trained the network, I got the following results.
In other words, the result looks like a steal!
I ran my tester and got these results.
Well, tell me, exactly where can you see here that the network is trained to 98% correct results????
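One common reason for this kind of mismatch is that the 98% was measured on the same data the network was trained on. A hedged sketch of an out-of-sample check follows; the model and data are placeholders, not the poster's actual setup:

```python
# Sketch: compare training accuracy with held-out (out-of-sample) accuracy.
# Data and model are illustrative only.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.neural_network import MLPClassifier

X = np.random.rand(2000, 8)
y = (X[:, 0] + 0.1 * np.random.randn(2000) > 0.5).astype(int)

# keep the newest part of the series as the test set (no shuffling)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, shuffle=False)

scaler = StandardScaler().fit(X_tr)       # fit the scaler on training data only
clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500, random_state=0)
clf.fit(scaler.transform(X_tr), y_tr)

print("train accuracy:", clf.score(scaler.transform(X_tr), y_tr))
print("test  accuracy:", clf.score(scaler.transform(X_te), y_te))
```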
Hello, after reading a few pages of the discussion I didn't find anything concrete about the following question:
- Is there anything currently working, like the MetaTraderR or MetaTrader5 packages, for MT and R integration?
Cheers
Sorry, I'll continue my epic... )))
After gaining a little more knowledge from the same Google, I came to some conclusions:
By fulfilling these two conditions, I got a noticeable reduction in the network's training time. In addition, I found that
Plus another question came up: what should the network's response (output) be?