Taking Neural Networks to the next level - page 33

 
Chris70 #:

3. the post-processing

Let's talk about the scale of our desired labels compared to the outputs of the neurons in the last layer, i.e. the output layer. For a standard neuron, the calculation method for updating its value is always the same: inputs times weights --> plus bias --> activation function. Fancier neurons like LSTM cells have, on top of that, their so-called "gates" (activation gate, input gate, forget gate, output gate) with individual weight matrices (and a bias and an activation function of their own), but the end result is comparable: a cell's output is always the result of some activation function. As for any neuron, this is also true for the neurons in the output layer, which is why the range that the output values can fall within is dictated by the chosen activation function of the last layer.
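As a minimal sketch of that update rule in MQL5 (a hypothetical standalone function, not taken from any of the linked articles; tanh stands in for the chosen activation):

double Neuron(const double &inputs[], const double &weights[], double bias)
  {
   // inputs times weights --> plus bias --> activation function
   double sum = bias;
   for(int i = 0; i < ArraySize(inputs); i++)
      sum += inputs[i] * weights[i];
   return MathTanh(sum);   // the activation dictates the output range: here (-1, +1)
  }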

This can be a problem, or at least we need to choose the model's parameters in a way that deals with it. If our "true" labels, for example, are in a range between 0 and 1000 and we have the sigmoid function (just as an example) as the chosen activation function of the last layer, this just doesn't match, because the sigmoid function returns only values between 0 and 1 (or more precisely: sigmoid(x) is between 0.5 and 1 if x is positive, or between 0 and 0.5 if x is negative). "Doesn't match" in this case means the network will almost always output a result close to +1 (because more is not possible with sigmoid), we will almost always end up with a gigantic error between output and label, and the network can't do anything about it.
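A tiny MQL5 script, purely illustrative, makes the mismatch concrete:

double Sigmoid(double x) { return 1.0 / (1.0 + MathExp(-x)); }

void OnStart()
  {
   double label  = 1000.0;          // "true" label, far outside sigmoid's (0, 1) range
   double output = Sigmoid(25.0);   // ~1.0, the best the output neuron can ever do
   Print("irreducible error = ", label - output);   // ~999, no matter how we train
  }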

If you take another example, the ReLU function, results can range between 0 and positive infinity. If the labels are very small values, this is also a less-than-perfect match, although it could work.

To summarize: we either need to choose an activation function (for the last layer) whose range of possible results matches the labels, or we need to rescale the labels before we compute the errors that go into the backpropagation process. How we scale the labels then essentially depends on the activation function. If we have tanh as the last activation function (for example), which can put out values between -1 and +1, then a simple min-max scaling method that squashes the labels between -1 and +1 might be the obvious idea, whereas normalisation (zero mean, one standard deviation), for example, would be a bad idea, because the label range would exceed the output value range.
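A sketch of that min-max scaling in MQL5 (a hypothetical helper; the training-set extremes are kept because they are needed again later, and hi > lo is assumed):

void ScaleLabelsToTanhRange(double &labels[], double &lo, double &hi)
  {
   // remember the training-set extremes; they are needed for the inverse
   lo = labels[ArrayMinimum(labels)];
   hi = labels[ArrayMaximum(labels)];
   for(int i = 0; i < ArraySize(labels); i++)
      labels[i] = 2.0 * (labels[i] - lo) / (hi - lo) - 1.0;   // squash into [-1, +1]
  }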

If we have scaled the labels to fit the backpropagation process, then of course we have to apply the opposite transformation to the results of forward propagation in order to get usable results on the same scale as the original labels.
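The matching inverse, using the same stored extremes (again a hypothetical helper):

double UnscaleOutput(double y, double lo, double hi)
  {
   // map a network output in [-1, +1] back to the original label units
   return (y + 1.0) * 0.5 * (hi - lo) + lo;
  }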

If the labels consisted of a data series that we made stationary, like in our autoencoder example, we need to reverse this, too, of course.
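If stationarity came from simple first-order differencing, for instance, reversing it just means adding the last known level back onto the predicted change. A sketch with hypothetical helpers (the autoencoder example may have used a different transform):

void Difference(const double &price[], double &diff[])
  {
   // first-order differencing: a common way to make a series stationary
   int n = ArraySize(price);
   ArrayResize(diff, n - 1);
   for(int i = 1; i < n; i++)
      diff[i - 1] = price[i] - price[i - 1];
  }

double UndoDifference(double last_price, double predicted_diff)
  {
   // reverse step: predicted level = last known level + predicted change
   return last_price + predicted_diff;
  }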

I don't know who you are, sir, but my name is TAYEBWA Bruno Businge and this article series may have just turned my life around, or more like back on track, actually. Today I have decided that I'm going to be the smart ass I used to be (and unfortunately suppressed) again. And my first, and most certainly not my last, project is understanding AI, and some day I'm going to pull up this comment on some TV show or something, and I'm going to be "the smart ass billionaire", and everyone is gonna look at me like I'm some sort of magician as I explain how tf I managed to decide exactly what I wanna be and just made it happen.

*Note to future watchers* I'm not any different from any of you, I just decided to embrace myself (thankfully that means smart) and unleash myself in my full glory. Because when you think about it, there's nothing to be afraid of. "Embarrassment"? Everyone you ever knew is gonna be gone in the next 100 years, and the people of the future won't give a f about your "embarrassment", so I might as well go down in flames. I want my soul to be proud in the year 3010 when we populate Mars, knowing that I lived my best life in the few years I had on earth.

Also, if a moderator feels tempted to take down this comment, please don't. My life is going to be a huge success story and I know you don't want to stop that, right? Thank you in advance for not deleting this comment.

Also, future people: I'm listening to the Gladiator soundtrack. If you haven't watched it, I don't care what year it is, you have to.
 

Programming a Deep Neural Network from Scratch using MQL Language 

https://www.mql5.com/en/articles/5486

Since machine learning has recently gained popularity, many have heard about Deep Learning and desire to know how to apply it in the MQL language. I have seen simple implementations of artificial neurons with activation functions, but nothing that implements a real Deep Neural Network. In this article, I will introduce you to a Deep Neural Network implemented in the MQL language with different activation functions, such as the hyperbolic tangent function for the hidden layers and the Softmax function for the output layer. We will move from the first step through to the end to completely form the Deep Neural Network.
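Since the abstract highlights the Softmax function for the output layer, here is a minimal sketch of Softmax in MQL5. This is my own illustration, not code taken from the article:

void Softmax(const double &scores[], double &probs[])
  {
   int n = ArraySize(scores);
   ArrayResize(probs, n);
   // subtracting the max keeps MathExp() from overflowing
   double max_s = scores[ArrayMaximum(scores)];
   double sum = 0.0;
   for(int i = 0; i < n; i++)
     {
      probs[i] = MathExp(scores[i] - max_s);
      sum += probs[i];
     }
   for(int i = 0; i < n; i++)
      probs[i] /= sum;   // probabilities now sum to 1
  }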
 
I enjoy reading your thread a lot to get into the topic. Although I have to add that cats, dogs, speech recognition, and the prediction of gender, heart disease or diabetes are very stationary, seasonal examples with lots of inputs that are often very consistent with the results they produce. The stock market, on the contrary, is very non-stationary and only partially seasonal, with inputs which rarely give a clear indication of how a system will behave even in the very near future.

I am not sure if we will get a very high degree of model generalisation by only adding inputs and multiplying them with changing weights, but I will keep reading and I hope this will provide some interesting solutions.
 
How is your trading going? If you do nothing different, will you make money consistently with how you are presently trading?

Please don't tell me you will keep doing the same thing and hoping for a different outcome. That never works, which is why so many traders give up.
What I think you are going to like best about it is how it boosts your confidence instantly. I have never traded before with so little fear or stress.
 
Tobias Johannes Zimmer #:
I enjoy reading your thread a lot to get into the topic. Although I have to add that cats, dogs, speech recognition, and the prediction of gender, heart disease or diabetes are very stationary, seasonal examples with lots of inputs that are often very consistent with the results they produce. The stock market, on the contrary, is very non-stationary and only partially seasonal, with inputs which rarely give a clear indication of how a system will behave even in the very near future.

I am not sure if we will get a very high degree of model generalisation by only adding inputs and multiplying them with changing weights, but I will keep reading and I hope this will provide some interesting solutions.

You are partly correct; the knowledge presented here is basic level. You can get that much understanding from any 3-hour video tutorial on the web. Any successful trades/short histories that may be presented are mostly conducted in favorable conditions, that is, stable and rarely reversing trends.

What next-level AI tends to aim for is a greater level of abstraction, achieved by the skillful insertion of "non-linear" operators. These MAY lead to better-performing algorithms in all conditions.

 
Pedro Severin #:

I have a question on this subject. I hope to write as clearly as possible.

I have been trading with neural networks (LSTM, to be precise) and when I trained them, the data was normalized.

But if the data is normalized, should I normalize it every time I perform live operations? I currently do it, but now I have my doubts about whether it is a good idea or not.

I mean... if the data is normalized every time a new candle is formed, the values would be different every time because I am adding a new candle, and after renormalizing, the candle that had a value of 1 (for example) would not have it anymore. Isn't there an inconsistency?

By the book, yes: you should feed data into the AI the same way as you did while training it. My own experiments with LSTMs showed that they will give a reasonably good-looking answer even if you provide them with unprocessed data.
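To make "the same way as while training" concrete, here is a sketch that assumes mean/std normalization whose statistics were saved from the training run instead of being recomputed on every new candle (g_mean, g_std and NormalizeLive are hypothetical names and values):

// hypothetical statistics saved from the training data set
double g_mean = 1.1050;
double g_std  = 0.0042;

double NormalizeLive(double close_price)
  {
   // apply the SAME transform as in training; recomputing mean/std on
   // every new candle would silently change the meaning of old values
   return (close_price - g_mean) / g_std;
  }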

That itself should lead you to the question: "if I have a system that works so haphazardly that it doesn't obviously change behavior on 'bad' inputs, is it any good?" Realizing just how chaotic markets are, any AI system will give you as much rope to hang yourself with as a coin flip.

 

Experiments with neural networks (Part 1): Revisiting geometry

In this article, I would like to share my experiments with neural networks. After reading a large amount of information available on MQL5, I came to the conclusion that the theory is sufficient. There are plenty of good articles, libraries and source codes. Unfortunately, all this data does not lead to a logical conclusion – a profitable trading system. Let's try to fix this.
 
Is there a profitable NN EA? If so, it would be the end of markets, so it is unlikely to exist.
 
Indanguang Samrow Panmei #:
Is there a profitable NN EA? If so, it would be the end of markets, so it is unlikely to exist.
No recommendation of EAs is allowed here. Keep searching, I mean change your search keywords.

Edit: Profitable??!
 
_MAHA_ #:
No recommendation of EAs is allowed here. Keep searching, I mean change your search keywords.

Edit: Profitable??!
;D does a profitable one even exist?