Better NN EA development

 

Good job.

How about directly passing the parameters without the help of files?

 

Neural Networks Source Code:

Adaline Network - The Adaline is essentially a single-layer backpropagation network. It is trained on a pattern recognition task, where the aim is to classify a bitmap representation of the digits 0-9 into the corresponding classes. Due to the limited capabilities of the Adaline, the network only recognizes the exact training patterns. When the application is ported into the multi-layer backpropagation network, a remarkable degree of fault-tolerance can be achieved.
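
The core of the Adaline is the Widrow-Hoff delta rule. A minimal C++ sketch of that rule (illustrative names, not the code from the archive below):

```cpp
#include <vector>

// Single Adaline unit: linear net input over a flattened +/-1 bitmap,
// with the bias folded in as w[0] against a constant x[0] == 1.
double adalineOutput(const std::vector<double>& w, const std::vector<double>& x) {
    double net = 0.0;
    for (size_t i = 0; i < w.size(); ++i) net += w[i] * x[i];
    return net;                                // classify by thresholding at 0
}

// Delta rule: w += eta * (target - output) * x.
void adalineTrain(std::vector<double>& w, const std::vector<double>& x,
                  double target, double eta) {
    double err = target - adalineOutput(w, x);
    for (size_t i = 0; i < w.size(); ++i) w[i] += eta * err * x[i];
}
```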

Backpropagation Network - This program implements the now classic multi-layer backpropagation network with bias terms and momentum. It is used to detect structure in time-series, which is presented to the network using a simple tapped delay-line memory. The program learns to predict future sunspot activity from historical data collected over the past three centuries. To avoid overfitting, the termination of the learning procedure is controlled by the so-called stopped training method.
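
The "tapped delay-line memory" simply means the network's input is a sliding window over the series. A sketch of how such training pairs could be built (window length and names are illustrative):

```cpp
#include <utility>
#include <vector>

// Tapped delay line: each training pair uses the last `taps` values
// as input and the next value as the prediction target.
std::vector<std::pair<std::vector<double>, double>>
makeDelayLineSet(const std::vector<double>& series, size_t taps) {
    std::vector<std::pair<std::vector<double>, double>> set;
    for (size_t t = taps; t < series.size(); ++t) {
        std::vector<double> window(series.begin() + (t - taps), series.begin() + t);
        set.emplace_back(window, series[t]);
    }
    return set;
}
```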

Hopfield Model - The Hopfield model is used as an autoassociative memory to store and recall a set of bitmap images. Images are stored by calculating a corresponding weight matrix. Thereafter, starting from an arbitrary configuration, the memory will settle on exactly the stored image that is nearest to the starting configuration in terms of Hamming distance. Thus, given an incomplete or corrupted version of a stored image, the network is able to recall the corresponding original image.
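
The weight matrix here is the Hebbian sum of outer products of the stored patterns (with a zero diagonal), and recall repeatedly thresholds until the state stops changing. A sketch with patterns as +/-1 vectors (illustrative, not the archive's code):

```cpp
#include <vector>

// Hebbian storage: w[i][j] = sum over patterns of p[i] * p[j], zero diagonal.
std::vector<std::vector<double>>
hopfieldWeights(const std::vector<std::vector<int>>& patterns) {
    size_t n = patterns[0].size();
    std::vector<std::vector<double>> w(n, std::vector<double>(n, 0.0));
    for (const auto& p : patterns)
        for (size_t i = 0; i < n; ++i)
            for (size_t j = 0; j < n; ++j)
                if (i != j) w[i][j] += p[i] * p[j];
    return w;
}

// Recall: update units in place (each sees the latest state) until stable.
std::vector<int> hopfieldRecall(const std::vector<std::vector<double>>& w,
                                std::vector<int> s, int maxIter = 100) {
    for (int it = 0; it < maxIter; ++it) {
        bool changed = false;
        for (size_t i = 0; i < s.size(); ++i) {
            double net = 0.0;
            for (size_t j = 0; j < s.size(); ++j) net += w[i][j] * s[j];
            int next = net >= 0 ? 1 : -1;
            if (next != s[i]) { s[i] = next; changed = true; }
        }
        if (!changed) break;                   // settled in an attractor
    }
    return s;
}
```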

Bidirectional Associative Memory - The bidirectional associative memory can be viewed as a generalization of the Hopfield model that allows a heteroassociative memory to be implemented. In this case, the association is between names and corresponding phone numbers. After coding the set of exemplars, the network, when presented with a name, is able to recall the corresponding phone number, and vice versa. The memory even shows a limited degree of fault-tolerance in the case of corrupted input patterns.
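
The heteroassociative step is the same thresholded matrix product as above, only with a rectangular weight matrix between the two layers. A one-direction sketch (a full BAM bounces between the layers until both are stable):

```cpp
#include <vector>

// BAM readout: given a layer-A pattern (e.g. an encoded name), recall
// layer B (e.g. the encoded phone number). The weights are stored as
// w[i][j] = sum over stored pairs of a[i] * b[j].
std::vector<int> bamRecall(const std::vector<std::vector<double>>& w,
                           const std::vector<int>& a, size_t lenB) {
    std::vector<int> b(lenB);
    for (size_t j = 0; j < lenB; ++j) {
        double net = 0.0;
        for (size_t i = 0; i < a.size(); ++i) net += a[i] * w[i][j];
        b[j] = net >= 0 ? 1 : -1;
    }
    return b;
}
```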

Boltzmann Machine - The Boltzmann machine is a stochastic version of the Hopfield model, whose network dynamics incorporate a random component in correspondence with a given finite temperature. Starting with a high temperature and gradually cooling down, allowing the network to reach equilibrium at each step, chances are good that the network will settle in a global minimum of the corresponding energy function. This process is called simulated annealing. The network is then used to solve a well-known optimization problem: the weight matrix is chosen such that the global minimum of the energy function corresponds to a solution of a particular instance of the traveling salesman problem.
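
The annealing loop itself is short. A sketch for +/-1 units (schedule parameters are illustrative; seed the RNG before use):

```cpp
#include <cmath>
#include <cstdlib>
#include <vector>

// Simulated annealing over a network of +/-1 units: at each temperature T,
// a unit turns "on" with the Boltzmann probability 1 / (1 + exp(-2*net/T)),
// and T is lowered geometrically until the state effectively freezes.
void anneal(const std::vector<std::vector<double>>& w, std::vector<int>& s,
            double tStart, double tEnd, double cool, int sweeps) {
    for (double T = tStart; T > tEnd; T *= cool) {
        for (int k = 0; k < sweeps; ++k) {            // let the net equilibrate
            for (size_t i = 0; i < s.size(); ++i) {
                double net = 0.0;
                for (size_t j = 0; j < s.size(); ++j) net += w[i][j] * s[j];
                double pOn = 1.0 / (1.0 + std::exp(-2.0 * net / T));
                s[i] = (std::rand() / (double)RAND_MAX) < pOn ? 1 : -1;
            }
        }
    }
}
```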

Counter-propagation Network - The counterpropagation network is a competitive network, designed to function as a self-programming lookup table with the additional ability to interpolate between entries. The application is to determine the angular rotation of a rocket-shaped object, images of which are presented to the network as a bitmap pattern. The performance of the network is a little limited due to the low resolution of the bitmap.

Self-Organizing Map - The self-organizing map (SOM) is a competitive network with the ability to form topology-preserving mappings between its input and output spaces. In this program the network learns to balance a pole by applying forces at the base of the pole. The behavior of the pole is simulated by numerically integrating the differential equations for its law of motion using Euler's method. The task of the network is to establish a mapping between the state variables of the pole and the optimal force to keep it balanced. This is done using a reinforcement learning approach: For any given state of the pole, the network tries a slight variation of the mapped force. If the new force results in better control, the map is modified, using the pole's current state variables and the new force as a training vector.
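
Leaving the pole-balancing and reinforcement parts aside, a single SOM training step is: find the best-matching unit, then pull it and its lattice neighbours toward the input. A sketch for a 1-D lattice (names and the Gaussian neighbourhood are illustrative):

```cpp
#include <cmath>
#include <vector>

// One SOM step: locate the best-matching unit (smallest squared distance),
// then move each unit toward the input, weighted by a Gaussian of its
// lattice distance to the winner.
void somStep(std::vector<std::vector<double>>& units,
             const std::vector<double>& x, double eta, double sigma) {
    size_t bmu = 0;
    double best = 1e300;
    for (size_t u = 0; u < units.size(); ++u) {
        double d = 0.0;
        for (size_t k = 0; k < x.size(); ++k) {
            double diff = units[u][k] - x[k];
            d += diff * diff;
        }
        if (d < best) { best = d; bmu = u; }
    }
    for (size_t u = 0; u < units.size(); ++u) {
        double dist = (double)u - (double)bmu;
        double h = std::exp(-(dist * dist) / (2.0 * sigma * sigma));
        for (size_t k = 0; k < x.size(); ++k)
            units[u][k] += eta * h * (x[k] - units[u][k]);
    }
}
```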

Adaptive Resonance Theory - This program is mainly a demonstration of the basic features of the adaptive resonance theory network, namely the ability to plastically adapt when presented with new input patterns while remaining stable at previously seen input patterns.

Neural Networks Source Code

Files:
8nn-src.zip  286 kb
 
got_fx:
Good job. How about directly passing the parameters without the help of files?

Yes!

You are right!

Based on the svm-toy.cpp example, we can write a DLL without the text files.

Can somebody help me?

Files:
svm-toy.cpp.txt  11 kb
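
For anyone picking this up: one possible shape for such a DLL, sketched against the libsvm API that svm-toy.cpp itself uses (svm_problem, svm_train, svm_predict). The exported function name, the flat-array layout, and the Windows/MT4 __stdcall convention are assumptions for illustration; parameter checking, error handling, and model caching are omitted:

```cpp
// Hypothetical MT4-callable wrapper around libsvm: train on arrays passed
// in memory and classify one query sample - no intermediate text files.
#include <vector>
#include "svm.h"   // libsvm header, as used by svm-toy.cpp

extern "C" __declspec(dllexport)
double __stdcall TrainAndPredict(const double* features, // nSamples*nDims, row-major
                                 const double* labels,   // nSamples class labels
                                 int nSamples, int nDims,
                                 const double* query,    // nDims values to classify
                                 double c, double gamma) {
    // Pack the flat arrays into libsvm's sparse node format
    // (1-based indices, -1 terminator per row).
    std::vector<std::vector<svm_node>> rows(nSamples, std::vector<svm_node>(nDims + 1));
    std::vector<svm_node*> x(nSamples);
    std::vector<double> y(labels, labels + nSamples);
    for (int i = 0; i < nSamples; ++i) {
        for (int j = 0; j < nDims; ++j)
            rows[i][j] = { j + 1, features[i * nDims + j] };
        rows[i][nDims] = { -1, 0.0 };
        x[i] = rows[i].data();
    }
    svm_problem prob = { nSamples, y.data(), x.data() };

    svm_parameter param = {};          // zero everything, then set what we need
    param.svm_type    = C_SVC;
    param.kernel_type = RBF;
    param.C           = c;
    param.gamma       = gamma;
    param.cache_size  = 100;
    param.eps         = 1e-3;
    param.shrinking   = 1;

    svm_model* model = svm_train(&prob, &param);

    std::vector<svm_node> q(nDims + 1);
    for (int j = 0; j < nDims; ++j) q[j] = { j + 1, query[j] };
    q[nDims] = { -1, 0.0 };
    double pred = svm_predict(model, q.data());

    svm_free_and_destroy_model(&model);  // svm_destroy_model in older libsvm
    return pred;
}
```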
 
got_fx:
I just thought that I should share my five cents' worth...

I've played a bit with NNs, SVMs, tree ensembles and other bizarre algorithms (see Flexible Neural Trees Ensemble for Stock Index (ResearchIndex)). (It works on standard problems, but I did not develop it to the fullest, realizing that I was spending too much time on the algorithm itself.)

The key is not in the learning algorithm but rather in the predictors.

It took me quite a bit of wasted time to realize this. We cannot expect to find dependencies in data that has none.

One cannot predict the next bar. It's impossible and naive. We can try to capture patterns, though. There was a post on Better's board stating that his approach had captured the market cycles. It was from a guy who claims to have experimented with NNs a lot (I tried to find the post but couldn't; maybe it was in Russian, which I understand a bit), and I totally agree with him.

I'll give you a hint - think about MACD and try to predict the position of the next extremum (min or max) - an idea that I never really exploited meaningfully because of the little hurdles along the way (proper filtering).
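
For what it's worth, labelling those extrema as learning targets could start like this; the 12/26 periods are just the usual MACD defaults, and the raw extrema would still need the filtering he mentions:

```cpp
#include <vector>

// Exponential moving average of a close series.
std::vector<double> ema(const std::vector<double>& p, int period) {
    std::vector<double> out(p.size());
    if (p.empty()) return out;
    double a = 2.0 / (period + 1), v = p[0];
    for (size_t i = 0; i < p.size(); ++i) { v = a * p[i] + (1 - a) * v; out[i] = v; }
    return out;
}

// Indices where the MACD line (fast EMA minus slow EMA) has a local
// extremum - candidate targets for "predict the next min/max".
std::vector<size_t> macdExtrema(const std::vector<double>& close) {
    std::vector<double> f = ema(close, 12), s = ema(close, 26);
    std::vector<double> macd(close.size());
    for (size_t i = 0; i < close.size(); ++i) macd[i] = f[i] - s[i];
    std::vector<size_t> idx;
    for (size_t i = 1; i + 1 < macd.size(); ++i)
        if ((macd[i] > macd[i-1] && macd[i] > macd[i+1]) ||
            (macd[i] < macd[i-1] && macd[i] < macd[i+1]))
            idx.push_back(i);   // raw, unfiltered extrema
    return idx;
}
```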

I was fascinated in the past by modern machine learning algorithms, but the truth lies, once again, in what we are actually trying to do. It comes down to the right approach.

BTW, libsvm is perhaps the best-developed one out there, but it did not give me the best results. Check mySVM and SVMlight.

As for NNs, I mostly trust RBF nets, as they tend to catch pockets of data very well. Those who know what an RBF is will understand. Properly configuring them (some sort of clustering first...) and training them is a pain.
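
To make the "pockets of data" point concrete: an RBF net places a Gaussian bump on each centre, so its response is local rather than global. A sketch of the forward pass only; the centres would come from the clustering step he mentions, and the output weights from a least-squares fit (both omitted):

```cpp
#include <cmath>
#include <vector>

// RBF network forward pass: a Gaussian activation around each centre,
// combined linearly. wOut holds one weight per centre plus a trailing bias.
double rbfPredict(const std::vector<std::vector<double>>& centers,
                  const std::vector<double>& widths,
                  const std::vector<double>& wOut,
                  const std::vector<double>& x) {
    double y = wOut.back();                        // bias term
    for (size_t c = 0; c < centers.size(); ++c) {
        double d2 = 0.0;
        for (size_t k = 0; k < x.size(); ++k) {
            double diff = x[k] - centers[c][k];
            d2 += diff * diff;
        }
        y += wOut[c] * std::exp(-d2 / (2.0 * widths[c] * widths[c]));
    }
    return y;
}
```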

I had abandoned the addiction called forex but now I am glad I found this thread.

Maybe we can make a team and really thoroughly work out the problem.

I've always believed that the solution is somewhere in the open but alas, could not find it by myself.

Must admit that I like the above.

This thread has been going for so long, and it is quite clear that there are some (many) extremely competent people out there just waiting for direction on where/how/what to implement for this NN idea.

However, it also feels like so much time has been spent keeping the discussion theoretical that we are in danger of losing the momentum (for this development) if we are unable to find direction soon.

Perhaps it may be time to take all we have learnt from this thread (and others) and try to implement something?

As such, may I suggest perhaps:

1.) Let us try to get a definition of what it is that we are actually trying to predict.

2.) Let us THEN define what input(s) may be best for the above prediction.

3.) Let us THEN take all we know, and try to define a net that may be best suited for the task - and hopefully is reasonably easy to implement/code.

4.) Let us get down to actual experimenting - see what comes out of it.

Just my ideas - any other suggestions/directions out there?

 

what are we trying to predict?

My approach (to the market) is something like this:

When the market is trending - moving in a relatively straight line up or down - it is relatively easy to predict where it is going next.

Unfortunately - trend is your friend till it breaks.

For a programmer the implication of the above is that, at some stage, price will turn.

Actually (I think) we only get paid on the turning points. This is where I'd like to open/close my trades.

As such - if I can predict turning points I have the foundation for a very powerful trading system, and what happens in between (the turning points) is of little/no interest to me.

However - most indicators actually tell you about market DIRECTION - in other words, the bits in between (the turning points) and NOT the turning points.

In fact - "predicting" turning points is still up to the individual and for this, we use SR lines, fibs, momentum, the moon or whatever else.

As such I have been working on the premise that a software package that can "predict" turning points (with reasonable accuracy) can, to some large degree, extend the individual's ability to make better decisions regarding turning points.

Bottom line - I think that, included in the definition of what we are trying to predict, we must have turning points/levels.

However (important I think) the next logical question would then be:

Do we need to define turning points merely in the price domain - and/or in the price AND time domain - or how else will we define a turning point?

Personally, I feel that, since I am only paid for accuracy in the price domain, this is where I should concentrate my efforts.

In fact - in my present systems, I do not define time periods - but rather price "periods" in some fashion, and I have no doubt in my mind that, because of this, my price line is much smoother than most ordinary (time-based) systems out there.

As such - time plays little/no part in any of my live systems.

Comments would be appreciated greatly.

 

Hello guys,

Please suggest the best algorithm for me to learn, so I won't be wasting my time. From what I read here, there are several methods in use: PNN, SVM, GPF, etc. Which one is the best? Maybe an experienced person could help me.

Thank you

 

barnix, please look at your message box. I sent you a message about another strategy I think you'll find interesting.

 
tiger_wong:
Hello guys,

Please suggest the best algorithm for me to learn, so I won't be wasting my time. From what I read here, there are several methods in use: PNN, SVM, GPF, etc. Which one is the best? Maybe an experienced person could help me.

Thank you

I would suggest:

1.) Do a problem definition - define what it is you'd like the program to do

2.) Do a resource definition - define what (data, indicators etc.) you have available

3.) Do a solution definition - read about different approaches, and try to find the one that matches your situation. Then - learn about that one.

Hope this may be of some help

 
jdpnz:
My approach (to the market) is something like this:

...

Jdpnz, I agree with you.

It's the ups and downs that pay off. Additionally, in my previous attempts (I don't trade) I've tried to non-dimensionalize the problem, that is, thinking in percent changes and having percent targets given past moves in the price rather than core pips.

I could not come up with a way to automatically and accurately determine past turning points, though, given the jagged behavior of most indicators. Would you have any suggestions?

What do you mean by "my price line is much smoother"?

Thanks.

 

time/price period data

got_fx:
Jdpnz, I agree with you.

What do you mean by "my price line is much smoother"?

Thanks.

The easiest answer is to give you a picture.

I used the EUR since 1/12/07 and, for the first graph, a fixed 10 pips +/- the previous period's max/min, which is still rather noisy (but should give you some idea).

The second graph is a fixed 2-hour period of the same data.

In live trading, I use the first graph - although I must admit that I implement all kinds of ideas to vary the 10 pips.

Oh yes - both graphs are close prices - however, actual closing times vary greatly between them, and as such a direct comparison is not really possible.

Personally, I find the first dataset useful because:

1.) Because each new data period will move by a fixed amount (10 pips in this case), the angle (velocity) of price movement is fixed. The only question then is - will the move be up or down?

2.) As such, price has now been taken from an analog signal to a digital signal. (No question as to how many pips the market moved - each 10 pips is a new period.)

3.) I can now describe price movement as a series of ones and zeros, with a 1 representing 10 pips up, and a 0 representing 10 pips down. As an example - a move of 50 pips up and then 25 pips down would become 1111100 (see the sketch after this list). In terms of AI (and maybe especially NNs) this makes rather a huge difference.

4.) Because of the fixed velocity it is much easier (and more precise) to set my indicators such that I have the best output at this (10 pips/period) rate. However - there is a tradeoff for this - what traders normally call momentum is lost (in the normal sense of the word), because momentum is actually related to price velocity and I keep velocity constant.
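
A sketch of that digitisation (the 10-pip step and EURUSD pip size are the example figures from the list above):

```cpp
#include <vector>

// Convert a price series into fixed-range steps: emit 1 for every full
// 10-pip move up from the last reference price, 0 for every 10-pip move
// down. 50 pips up then 25 down -> 1 1 1 1 1 0 0, as in the example above.
std::vector<int> digitize(const std::vector<double>& price,
                          double step = 0.0010) {   // 10 pips on EURUSD
    std::vector<int> out;
    if (price.empty()) return out;
    double ref = price[0];
    for (double p : price) {
        while (p >= ref + step) { out.push_back(1); ref += step; }
        while (p <= ref - step) { out.push_back(0); ref -= step; }
    }
    return out;
}
```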
