MetaTrader 5 Python User Group - how to use Python in Metatrader - page 82

How do I make an offset?
This is the table (DataFrame).
How do I make it like this:
For what? Just print it out?
Here, found: pandas.DataFrame.shift.
The main thing is not to forget to delete the last line, as it will contain rubbish (NaN after the shift).
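A minimal sketch of that shift-based offset (the column name `close` and the values are invented for illustration):

```python
import pandas as pd

# toy table standing in for the poster's DataFrame
df = pd.DataFrame({"close": [1.10, 1.12, 1.11, 1.15]})

# shift(-1) pulls each next row's value up, creating the offset column
df["close_next"] = df["close"].shift(-1)

# the last row now holds NaN ("rubbish"), so drop it
df = df[:-1]
```

`df.dropna()` would work just as well here as slicing off the last row.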
Dear Sirs, please advise me on what is wrong with my understanding.
I have built a neural network. Prepared the data.
Trained it.
Result.
And then I don't understand what happens...
predictions = model.predict(X_test[:15])
Why such "prediction" results? Expected either 0-0, 0-1 or 1-0....
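For a two-neuron output, `model.predict` returns one probability per class for each sample, not hard 0/1 labels; taking the argmax of each row recovers the predicted class. A sketch with made-up numbers standing in for the actual model output:

```python
import numpy as np

# hypothetical model.predict output: one probability pair per sample
predictions = np.array([
    [0.97, 0.03],   # class 0 more likely
    [0.15, 0.85],   # class 1 more likely
    [0.60, 0.40],   # class 0 more likely
])

# the predicted class is the column with the larger probability
labels = predictions.argmax(axis=1)
```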
It's always like this...
When you ask a question, everyone thinks: why bother?
Subjective opinion: 93% of the time you have to google... and 90% of that time goes into formulating the question properly...
Thanks for the feedback! That's all for now. I'm going to google....
These are the probabilities of classes 1 and 2.
The 2nd has a higher probability, so that class is predicted.
Their sum must equal 1; there is some kind of training error here.
It should output 1 neuron if it is binary classification. Or softmax.
Binary classification does not imply 1 neuron per output. At least from what I've found...
But the problem is, the picture doesn't change when using other loss functions either!
I'll write a data tester tomorrow with prediction validation. But something tells me the result will be deplorable!
I just can't understand why the "accuracy" is over 96% and the prediction is "like this"...
Maybe I'm doing something wrong?
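On the 1-neuron-vs-softmax disagreement above: for two classes the designs are mathematically equivalent, since a two-logit softmax reduces to a sigmoid of the logit difference. A quick NumPy check (the logit values are arbitrary):

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

a, b = 0.3, 1.7                            # arbitrary logits for classes 0 and 1
p_softmax = softmax(np.array([a, b]))[1]   # P(class 1) from a 2-neuron softmax
p_sigmoid = sigmoid(b - a)                 # P(class 1) from a single sigmoid neuron
```

So either design can represent the same classifier; what matters is pairing the output with the matching loss (binary cross-entropy for one sigmoid neuron, categorical cross-entropy for softmax).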
I guess I have no idea what kind of network constructor it is.
A single neuron does not imply it by itself, but there should be an adder and an activation function. Usually you put 1 neuron.
There may be many reasons. For example, the data is not normalised or not properly prepared, or the network is built incorrectly.
The problem is that normalization is a lost cause!
Let me explain. There are some data A, B, C...
They differ in significance and so on. Everybody (Google) says that normalization should be done by columns (A-A-A, B-B-B, C-C-C), not by rows. This is logically understandable.
But when new data arrives for "prediction", HOW do you normalize it if it is only ONE row? Any value in that row can fall outside the range established on the training and test data.
And normalization by rows has no effect!
Actually, already after checking these nuances, I had this "cry of the soul" ))))
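On the one-row normalization problem: the usual answer is to fit the per-column statistics on the training data once, store them, and reuse them for every new row, clipping values that fall outside the training range. A minimal min-max sketch (the numbers are invented):

```python
import numpy as np

# training data: columns A, B, C with very different scales
train = np.array([
    [1.0, 10.0, 100.0],
    [2.0, 20.0, 200.0],
    [3.0, 30.0, 300.0],
])

# fit the normalization once, per column, on training data only
col_min = train.min(axis=0)
col_max = train.max(axis=0)

def normalize(row):
    """Min-max scale one new row using the stored training statistics."""
    scaled = (row - col_min) / (col_max - col_min)
    return np.clip(scaled, 0.0, 1.0)   # clamp values outside the training range

new_row = np.array([2.5, 35.0, 150.0])  # 35.0 exceeds the training max of 30
result = normalize(new_row)
```

scikit-learn's `MinMaxScaler` does the same thing: `fit` on the training set, then `transform` any number of new rows, including a single one.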