"New Neural" is an Open Source neural network engine project for the MetaTrader 5 platform. - page 16

 

I cannot understand the essence of the project. For example, what is the neural network engine? And why does it have to be the same for different types of networks? Different networks "move" efficiently in different ways, and the way a network's structure is described may differ accordingly.

Take the solution of linear equations as a simple example. It is of course possible to solve every kind of linear system with a single method, Gaussian elimination, but if we know the structure of the coefficient matrix, there are more efficient methods (see the sketch below). The same goes for training networks: feedforward networks are trained by error backpropagation, echo state networks by least squares, and so on. Why create one engine instead of several? Why have a team of programmers working on the same thing while trying to reach consensus? Unanimity in this case stifles creativity.

Let different programmers write the code of different networks as libraries that can be called from indicators and Expert Advisors. In that case the project is no different from the existing practice of programmers submitting their code to the Code Base, accompanied by an article with a detailed description of the network, how it works, and examples of its use. There is nothing wrong with several programmers independently implementing the same network; there are dozens of ways to train feedforward networks. With that kind of approach, instead of wasting a lot of time discussing how to properly describe a network, people would already be writing the code for those networks. For example, I am very interested to read TheXpert's article on echo state networks. But apparently it will not come out soon.
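To make the linear-algebra analogy concrete: a general dense system costs O(n^3) operations with Gaussian elimination, while a tridiagonal system is solved in O(n) by the Thomas algorithm. A minimal sketch (illustrative, not from the original post):

#include <vector>
#include <cstddef>

// Solves A x = d for a tridiagonal matrix A in O(n).
// a = sub-diagonal (a[0] unused), b = main diagonal,
// c = super-diagonal (c[n-1] unused), d = right-hand side.
std::vector<double> SolveTridiagonal(std::vector<double> a,
                                     std::vector<double> b,
                                     std::vector<double> c,
                                     std::vector<double> d)
{
        const size_t n = b.size();

        // forward sweep: eliminate the sub-diagonal
        for (size_t i = 1; i < n; ++i)
        {
                double m = a[i] / b[i - 1];
                b[i] -= m * c[i - 1];
                d[i] -= m * d[i - 1];
        }

        // back substitution
        std::vector<double> x(n);
        x[n - 1] = d[n - 1] / b[n - 1];
        for (size_t i = n - 1; i-- > 0; )
                x[i] = (d[i] - c[i] * x[i + 1]) / b[i];

        return x;
}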

 
gpwr:

I cannot understand the essence of the project. For example, what is the neural network engine? And why does it have to be the same for different types of networks?

We want versatility. The internals and the assembly will of course be different. Unification is needed to make visualization possible and to allow combining networks into committees.

gpwr:

For example, I am very interested to read TheXpert's article on echo state networks. But apparently it will not come out soon.

Well, since this is Open Source, maybe you will get to read it :).

Representation of the weights:

(image: representation of the weights, not reproduced)

That's all :)
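Judging by the XOR test further down (v *= v and v.sum()), Vector behaves like std::valarray<double>. A plausible reconstruction of the weight types (an assumption, not confirmed code from the project):

#include <valarray>
#include <cstddef>

// a dense value vector with element-wise arithmetic
typedef std::valarray<double> Vector;

// a dense weight matrix, indexed like Synapse::operator()(inIdx, outIdx)
class Matrix
{
public:
        Matrix(size_t rows = 0, size_t cols = 0)
                : m_Cols(cols), m_Data(0.0, rows * cols) {}

        double& operator()(size_t r, size_t c)       { return m_Data[r * m_Cols + c]; }
        double  operator()(size_t r, size_t c) const { return m_Data[r * m_Cols + c]; }

private:
        size_t m_Cols;
        Vector m_Data;
};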

Representation of the network:

(image: structure of the network, not reproduced)

Sample layer template:

#include <vector>
#include <boost/shared_ptr.hpp>

class LayerFunctional
{
        typedef boost::shared_ptr<Synapse> SynapsePtr;
        typedef std::vector<SynapsePtr> SynapseVector;
        typedef SynapseVector::iterator SynapseIterator;

public:
        typedef boost::shared_ptr<Vector> VectorPtr;
        typedef boost::shared_ptr<IPatternCollection> PatternsPtr;

public:
        LayerFunctional
                (         bool bAdaptiveStep
                        , double step
                        , size_t newSize
                        , bool bTUsed
                );

        void Init(const Initializer& initializer);

        void AddInputSynapse(boost::shared_ptr<Synapse> pSynapse);
        void RemoveInputSynapse(boost::shared_ptr<Synapse> pSynapse);

        void AddOutputSynapse(boost::shared_ptr<Synapse> pSynapse);
        void RemoveOutputSynapse(boost::shared_ptr<Synapse> pSynapse);

        void FwdPropagate();
        void BackPropagate();

        void ClearInputs();
        void ClearErrors();

        size_t Size() const;

        void SetSize(size_t newSize);

        bool ThresholdsUsed() const;
        void SetThresholdsUsed(bool used = false);

        bool AdaptiveStepUsed() const;
        void SetAdaptiveStepUsed(bool used = false);
        double GetStep() const;

        VectorPtr GetInputs() const;
        VectorPtr GetOutputs() const;
        const Vector& GetErrors() const;
        const Vector& GetThresholds() const;

        PatternsPtr GetInputCollection() const;
        PatternsPtr GetOutputCollection() const;


        void SetInputCollection(PatternsPtr patterns);
        void SetOutputCollection(PatternsPtr patterns);

        void FeedInputs(size_t patternIndex);
        void CompareOutputs(size_t patternIndex);

        void DoF(Vector& data);
        void DodF(Vector& data);

        void Learn();

        double CountAlpha();

        // boost::serialization hooks; the bodies are left empty in this sketch
        template<class Archive>
        void save(Archive & ar, const unsigned int version) const
        {
        }

        template<class Archive>
        void load(Archive & ar, const unsigned int version)
        {
        }

private:
        bool m_bAdaptiveStep;

        double m_Step;

        size_t m_Size;
        bool m_bTAreUsed;

        Vector m_vT;
        Vector m_vErrors;
        VectorPtr m_pInputs;
        VectorPtr m_pOutputs;

        SynapseVector m_vInSynapses;
        SynapseVector m_vOutSynapses;

        PatternsPtr m_pInputCollection;
        PatternsPtr m_pOutputCollection;
};

This is an approximation of an MLP implementation; most of it fits the universal interface.

m_vInSynapses is the vector of synapses feeding the layer. These synapses and the layer itself are connected through m_pInputs, a common buffer, so a change in the buffer is immediately visible to both the layer object and the synapses.

Similarly, the output synapses are connected via the output buffer.
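A minimal illustration of the shared-buffer idea (a standalone sketch; the typedefs are assumptions, only the boost::shared_ptr usage mirrors the interface above):

#include <boost/shared_ptr.hpp>
#include <valarray>

typedef std::valarray<double> Vector;           // assumed, see the note above
typedef boost::shared_ptr<Vector> VectorPtr;

int main()
{
        VectorPtr buffer(new Vector(0.0, 3));   // one buffer with two owners

        VectorPtr layerInputs = buffer;         // held by the layer (m_pInputs)
        VectorPtr synapseOutputs = buffer;      // held by a synapse (m_pOutBuffer)

        (*synapseOutputs)[0] = 0.5;             // the synapse writes its output...

        // ...and the layer immediately sees the same value, with no copying
        return ((*layerInputs)[0] == 0.5) ? 0 : 1;
}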

 

Synapses:

class Synapse
{
        Synapse()
        {
        }

public:
        typedef boost::shared_ptr<Vector> VectorPtr;

public:
        Synapse(boost::shared_ptr<Vector> inputVector, boost::shared_ptr<Vector> outputVector);
        
        void AssignInput(boost::shared_ptr<Vector> inputVector);
        void AssignOutput(boost::shared_ptr<Vector> outputVector);

        void FwdPropagate();
        void BackPropagate(LayerFunctional* outLayer);

        double operator()(size_t inIdx, size_t outIdx) const;

        const Detail::Slice FromInput(size_t inIdx) const;
        const Detail::Slice ToOutput(size_t outIdx) const;

        size_t GetOutSize() const;
        size_t GetInSize() const;

        const Vector& GetErrors() const;
        void ZeroErrors();

        void Init(const Initializer& initializer);
        void Learn(LayerFunctional* outLayer, const double& step);

        VectorPtr GetInputs() const;
        VectorPtr GetOutputs() const;

        void SetValues(const Synapse& other);

private:
        size_t m_InSize;
        size_t m_OutSize;

        Matrix m_vSynapses;
        Vector m_vErrors;

        VectorPtr m_pInBuffer;
        VectorPtr m_pOutBuffer;
};

Synapses have errors too: neuron errors are used to train the thresholds, synapse errors to train the weights.

A synapse also holds the weight matrix itself (what is missing here is a weight-availability matrix, which could be set manually) and the buffers for communicating with the layers.
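One possible shape for that missing weight-availability matrix (purely hypothetical; it is not part of the posted interface) is a 0/1 mask the same size as the weights that gates updates during learning:

#include <valarray>
#include <cstddef>

typedef std::valarray<double> Vector;   // assumed, as above

// hypothetical sketch: weights plus a manually set availability mask
struct MaskedWeights
{
        size_t inSize, outSize;
        Vector weights; // row-major inSize x outSize
        Vector mask;    // 1 = connection exists and is trainable, 0 = frozen/absent

        MaskedWeights(size_t in, size_t out)
                : inSize(in), outSize(out)
                , weights(0.0, in * out)
                , mask(1.0, in * out)   // fully connected by default
        {
        }

        // delta-rule style update, skipped wherever the mask is zero
        void Learn(const Vector& inActivations, const Vector& outErrors, double step)
        {
                for (size_t i = 0; i < inSize; ++i)
                        for (size_t j = 0; j < outSize; ++j)
                                if (mask[i * outSize + j] != 0.0)
                                        weights[i * outSize + j] += step * inActivations[i] * outErrors[j];
        }
};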

 

Net:

class Net
{
        typedef boost::shared_ptr<LayerFunctional> LayerPtr;
        typedef boost::shared_ptr<Synapse> SynapsePtr;
        
        typedef std::vector<LayerPtr> LayersVector;
        typedef std::vector<SynapsePtr> SynapsesVector;

public:
        Net();

        void AddLayer(size_t size, bool bTUsed);
        void AddLayer(const LayerPtr& pLayer);

        LayerPtr GetLayer(size_t index) const;
        SynapsePtr GetSynapse(size_t index) const;

        void ConnectLayers(LayerPtr& inLayer, LayerPtr& outLayer);
        size_t GetLayersCount() const;

        size_t GetSynapsesCount() const;

        void Init(const Initializer& initializer);

        void FeedInputs(size_t patternIndex);
        void FwdPropagate();
        void BackPropagate();
        void CountErrors(size_t patternIndex);
        void Learn();

        size_t GetLayerID(const LayerPtr& pLayer) const;

        template<class Archive>
        void save(Archive & ar, const unsigned int version) const;
        template<class Archive>
        void load(Archive & ar, const unsigned int version);

private:
        struct Link;

        typedef std::vector<Link> LinksVector;

private:
        size_t m_LayersCount;
        size_t m_SynapsesCount;

        LayersVector m_vLayers;
        SynapsesVector m_vSynapses;
        LinksVector m_vLinks;
};

This is roughly what a network looks like.

 

Construction and use in the simplest test:

void XORTest()
{
        Net net;

        // layer arguments: (bAdaptiveStep, step, size, bThresholdsUsed)
        LayerPtr inLayer(new Layer<Functions::LinearFunction>(false, 0.2, 2, false));
        LayerPtr hiddenLayer(new Layer<Functions::BiSigmoidFunction>(false, 0.2, 3, false));
        LayerPtr outLayer(new Layer<Functions::LinearFunction>(false, 0.2, 1, false));

        net.AddLayer(inLayer);
        net.AddLayer(hiddenLayer);
        net.AddLayer(outLayer);

        net.ConnectLayers(inLayer, hiddenLayer);
        net.ConnectLayers(hiddenLayer, outLayer);

        PatternsPtr inPattern(new PatternCollection<>(2));
        // filling patterns

        PatternsPtr outPattern(new PatternCollection<>(1));
        // filling patterns

        inLayer->SetInputCollection(inPattern);
        outLayer->SetOutputCollection(outPattern);

        Initializer initer(0.1);
        net.Init(initer);

        size_t count = 0;
        double Es = 0.0;

        do 
        {
                Es = 0.0;
                for (size_t i = 0; i < 4; ++i)
                {
                        net.FeedInputs(i);
                        net.FwdPropagate();
                        net.CountErrors(i);
                        net.BackPropagate();
                        net.Learn();

                        // accumulate E = 1/2 * sum(e^2) over the output layer
                        Vector v(outLayer->Size());
                        v = outLayer->GetErrors();
                        v *= v; // element-wise square

                        Es += v.sum()/2.0;
                }

                ++count;
        } while(Es > 0.0001 && count < 10000);
}

Plus you can make templates for typical configurations.
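For instance, a helper that assembles a fully connected MLP from a list of layer sizes, reusing only the calls shown above (a sketch; MakeMLP is a hypothetical name, and it assumes Layer<F> derives from LayerFunctional, as the XOR test implies):

#include <vector>

Net MakeMLP(const std::vector<size_t>& sizes, double step = 0.2)
{
        Net net;
        std::vector<LayerPtr> layers;

        for (size_t i = 0; i < sizes.size(); ++i)
        {
                // linear input/output layers, sigmoid hidden layers, as in the XOR test
                bool hidden = (i > 0 && i + 1 < sizes.size());

                LayerPtr layer;
                if (hidden)
                        layer.reset(new Layer<Functions::BiSigmoidFunction>(false, step, sizes[i], false));
                else
                        layer.reset(new Layer<Functions::LinearFunction>(false, step, sizes[i], false));

                net.AddLayer(layer);
                layers.push_back(layer);
        }

        // chain the layers front to back
        for (size_t i = 0; i + 1 < layers.size(); ++i)
                net.ConnectLayers(layers[i], layers[i + 1]);

        return net;
}

// usage, matching the XOR test above:
//   std::vector<size_t> sizes;
//   sizes.push_back(2); sizes.push_back(3); sizes.push_back(1);
//   Net xorNet = MakeMLP(sizes);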

 
TheXpert:

You have to know at least what to take as input, what to teach, and how to evaluate the result. And these things have to be put together by hand.

Exactly. And I don't know. What is more, there are sets that are very difficult to combine at all. Neural networks are just a tool; in capable hands (take Leonid, for example) a very powerful one.

I wonder if he would be willing to give you advice.

In the end, it is necessary to avoid the risk of creating a product that fully meets your own functional needs but is completely unusable for the other 99% of potential users.

If the task is to offer a new trading tool to the audience, it should ideally be designed for everyone, or almost everyone: those opening the terminal for the first time, those who have spent years on QUIK, those with two university degrees and those with only a parish-school education.

The interface and the product itself should be as simple and understandable as Lego.

 
Mischek:

If the task is to offer a new trading tool to the audience, it should ideally be designed for everyone, or almost everyone: those opening the terminal for the first time, those who have spent years on QUIK, those with two university degrees and those with only a parish-school education.

I think it's desirable to have someone competent involved, but not a programmer.
 
papaklass:
Take a survey among traders. What tasks do they solve when trading? You'll get what most people need.
Is it really too much trouble for the traders themselves to come here and write something?