In my case it does as I wrote: it observes the current state and returns true or false depending on whether or not it recognizes what it has been trained on. There is no probability involved, as I am not using a softmax activation. It does not predict. Basically, it is a boolean classifier.
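A single output neuron with a sigmoid activation, thresholded at 0.5, behaves exactly like such a boolean classifier; no softmax and no probability interpretation is involved. A minimal sketch (C++ rather than MQL5; the function name and weights are hypothetical, not the author's actual network):

```cpp
#include <cmath>
#include <vector>

// Hypothetical single-neuron boolean classifier: the weights and bias are
// whatever training produced; the output is thresholded, not read as a
// probability. Note a > 0.5 exactly when the weighted sum z > 0.
bool recognizes(const std::vector<double>& input,
                const std::vector<double>& weights, double bias)
{
    double z = bias;
    for (std::size_t i = 0; i < input.size(); ++i)
        z += weights[i] * input[i];
    double a = 1.0 / (1.0 + std::exp(-z));  // sigmoid activation
    return a > 0.5;                         // boolean decision, no softmax
}
```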

BTW, posting .ex5 files does not conform to the forum rules.

Sorry, didn't know. Are you sure? Then why, after clicking "attach file", does it say

Attach up to 32 files maximum 16 Mb per upload (.gif .png .jpg .jpeg .zip .txt .log .mqh .ex5
.mq5 .mq4 .ex4 .mt5 .set .tpl) ?

----

As for the fishes and the birds: even if your labels don't need future data, which of course isn't a general necessity (can you give an example
of such a label?), I guess you still make trading decisions based on the results, because you assume that they are relevant for the near
future.

This seems pretty obvious: if your results were irrelevant, you would ignore them, and if the results could be ignored, you wouldn't calculate them in the first place. You may call it something else, but prediction is still implied (semantics...).

Only condition: the activation function needs to be a non-linear one.
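The reason for that condition is that stacking purely linear layers collapses into a single linear map, so without a non-linearity the network can never fit anything non-linear. A small numerical illustration (C++; the 1-D "layers" and their weights are hypothetical):

```cpp
// One linear "layer": y = w*x + b.
double layer(double x, double w, double b) { return w * x + b; }

// Two linear layers composed without any activation in between:
// w2*(w1*x + b1) + b2 = (w2*w1)*x + (w2*b1 + b2).
double two_linear_layers(double x)
{
    return layer(layer(x, 2.0, 1.0), 3.0, -4.0);  // hypothetical weights
}

// The single equivalent linear layer: stacking added no expressiveness.
double collapsed(double x)
{
    return layer(x, 3.0 * 2.0, 3.0 * 1.0 + -4.0);
}
```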

How about this function (MQL5)?

int myfn(int x, int y, int z)
  {
   // seed the generator, so the same x always reproduces the same sequence
   srand(x);
   int r=0;
   // count how many of y pseudo-random draws exceed the threshold z
   for(int i=0; i<y; i++)
     {
      if(rand()>z)
         r++;
     }
   return r;
  }

srand() initializes the pseudo-random number generator to produce the same pseudo-random sequence when using the same input.

rand() returns the next pseudo-random number in sequence, with its output always ranging between 0 and 32767.

Even though this function uses pseudo-random number generation, the generator is always initialized at the start of the function with a specific seed (x), so it always produces the same output, given the same 3 input parameters.

The output will always be somewhere between 0 and "y".

All inputs and the output are integers.

PS. I could have used GetTickCount() as the input to srand, but wanted to give you a fair chance ;)
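The C standard library's srand()/rand() behave the same way as their MQL5 counterparts (a seeded, deterministic sequence; only the upper bound, RAND_MAX, is implementation-defined rather than fixed at 32767), so the determinism claim can be checked with a direct C++ port of the function:

```cpp
#include <cstdlib>

// C++ port of the MQL5 function above: seed with x, then count how many
// of y pseudo-random draws exceed the threshold z. Because the generator
// is re-seeded on every call, the result depends only on (x, y, z).
int myfn(int x, int y, int z)
{
    std::srand(x);
    int r = 0;
    for (int i = 0; i < y; ++i)
        if (std::rand() > z) ++r;
    return r;
}
```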

MathSrand() initializes the pseudo-random number generator to produce the same random sequence when using the same input.

MathRand() returns the next pseudo-random number in sequence, with its output always ranging between 0 and 32767.

Even though this function uses pseudo-random number generation, the generator is always initialized at the start of the function with a specific seed (x), so it will always produce the same output, given the same 3 input parameters.

The output will always be somewhere between 0 and "y".

All inputs and the output are integers.

Wow! You were creative. This is a hard one, but it should be possible. One problem I see is that this function is as far away from being continuous as possible, so during training it will be a lot harder to estimate how far away from the truth the network still is, until we actually get there. Are you sure you want this for the example?

I guess with a more continuous function you could better see the error declining.

Also, this function can't be expressed as f(x), because it re-uses the last result, so it's recursive, like an infinite loop of f(f(f(f(f(x...))))... So in order to realistically remodel the rand() function, I'd need to be allowed to refeed the last rand() result as an input (the rand() function itself has this knowledge, too).

Something fancy with the before-mentioned sin/cos/exp etc. would probably be better for seeing the process at work. So I don't "like" the example, but I guess that's part of the challenge...
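That hidden state is exactly what makes rand() a recurrence rather than a plain f(x). A linear congruential generator makes this visible: each output is a fixed function of the previous state, so refeeding the last value reproduces the rest of the sequence. A sketch (C++; the constants are the well-known Numerical Recipes LCG parameters, chosen here only for illustration):

```cpp
#include <cstdint>

// A PRNG is a recurrence s[n+1] = step(s[n]); the "randomness" is just
// repeated application of one deterministic function to hidden state.
std::uint32_t lcg_step(std::uint32_t s)
{
    return 1664525u * s + 1013904223u;  // Numerical Recipes constants
}

// Drawing n numbers is the same as unrolling f(f(f(...f(seed)))).
std::uint32_t lcg_nth(std::uint32_t seed, int n)
{
    std::uint32_t s = seed;
    for (int i = 0; i < n; ++i)
        s = lcg_step(s);
    return s;
}
```

Feeding the last output back in as the "seed" continues the sequence exactly, which is the refeeding idea described above.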


Well, it's your theory, so ... feel free to configure your ANN any way you want.

And ... it wouldn't be a challenge if it was a simple continuous function.
Would it?

PS. The market isn't a simple continuous function either, so we aren't very far from the subject of this topic.

Brian Rumbles: I think Enrique's idea of using a NN to classify whether or not it recognizes the condition sounds like it can be very useful/profitable

While the accuracy is quite good out of sample, not all entries will be profitable, so all the classical problems remain: when to take profit, and when to write off. The only real benefit is that it is self-optimizing.


Chris70: Sorry, didn't know. Are you sure?

Yes, don't ask me why. It's the same as us being able to post about brokers, even though it isn't allowed.

Through a private message it would be no problem.


NELODI: PS. The market isn't a simple continuous function either, so we aren't very far from the subject of this topic.

Apart from "gaps": continuous, not simple.

I'll try my best ;-)


Good luck.

Btw ... are you saying that you feed raw price data into your ANN? I mean ... your inputs are the actual prices?


Before the (fractional) stationary transformation and normalization: yes.
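A "fractional stationary transformation" presumably refers to fractional differencing: differencing the price series by a non-integer order d, which removes most of the trend while preserving more memory than ordinary first differences. The weights follow the recursion w0 = 1, wk = -w(k-1) * (d - k + 1) / k. A sketch (C++; the truncation window and the choice of d are arbitrary assumptions here, not the author's actual settings):

```cpp
#include <vector>
#include <cstddef>

// Weights of the binomial expansion of (1 - B)^d for fractional order d:
// w0 = 1, wk = -w(k-1) * (d - k + 1) / k. For d = 1 this reduces to
// {1, -1, 0, 0, ...}, i.e. an ordinary first difference.
std::vector<double> frac_diff_weights(double d, std::size_t n)
{
    std::vector<double> w(n);
    w[0] = 1.0;
    for (std::size_t k = 1; k < n; ++k)
        w[k] = -w[k - 1] * (d - k + 1) / k;
    return w;
}

// Apply the truncated fractional difference to a price series:
// out[t] = sum_k w[k] * price[t - k], starting once a full window fits.
std::vector<double> frac_diff(const std::vector<double>& price, double d,
                              std::size_t window)
{
    std::vector<double> w = frac_diff_weights(d, window);
    std::vector<double> out;
    for (std::size_t t = window - 1; t < price.size(); ++t) {
        double v = 0.0;
        for (std::size_t k = 0; k < window; ++k)
            v += w[k] * price[t - k];
        out.push_back(v);
    }
    return out;
}
```

With 0 < d < 1 the weights decay slowly instead of cutting off, which is where the extra memory comes from.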
