"New Neural" is an Open Source neural network engine project for the MetaTrader 5 platform.

 
TheXpert:

The first thing you probably need is a new, simple forum for the project.

I think we need to create a project on SourceForge and move the discussion there straight away.

The first thing we need is brainstorming: a database of ideas, however surrealistic, and a forum thread (being open) suits this best.

Everyone is welcome to join the brainstorm, even non-specialists. Brainstorm suggestions are best distinguished from the rest of the text by color.

For example: build a graphical engine for creating a network. Before starting the engine, set the number of layers, then an input window for each layer specifying its number of neurons; trend lines added by the user define the connections.

 
TheXpert:

Andrei, do you mind if we make a brainstorming session out of this thread?

Of course I don't mind. That's what this thread is for.
 

Storm Topics:

-- The type of project (the way the user interacts with the project)

-- Networks that will be implemented in the project

-- Pre-processing and everything that goes with it

-- Architecture, interfaces

-- Implementation, connectivity

-- Testing, debugging

Forbidden: criticizing any idea, even the most delusional one, and flooding. Level of competence does not matter. If you are interested and have thoughts, speak up!

Welcome: proposing new ideas and developing existing ones.

 
TheXpert:
...

Perhaps I should add:

-- Postprocessing.

In my opinion, special attention should be paid to the interfaces between individual modules, with scalability in mind (connecting/disconnecting different signals, the possibility of creating committees of networks, etc.). Hence the need to separate the network itself (an MLP with two hidden layers, for example, takes only a few tens of lines; see the attached example) from the learning algorithm.

Files:
F_MLP.mqh  5 kb
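The attached F_MLP.mqh is not reproduced here, but the separation being proposed can be sketched in C++-style code (close to MQL5): the network class holds only topology, weights, and the forward pass, while any learning algorithm stays behind a separate interface. Class and function names below are illustrative, not from the project.

```cpp
#include <cmath>
#include <vector>

// Bipolar sigmoid mapping activations into [-1, 1].
static double Sigmoid(double x) { return 2.0 / (1.0 + std::exp(-x)) - 1.0; }

// The network itself: topology, weights, forward pass -- nothing else.
// Training would live elsewhere, behind whatever interface the project settles on.
class MLP {
public:
    explicit MLP(const std::vector<int>& layerSizes) : sizes(layerSizes) {
        // One flat weight matrix (with a bias column) per pair of adjacent layers.
        for (size_t l = 0; l + 1 < sizes.size(); ++l)
            weights.push_back(std::vector<double>(
                (sizes[l] + 1) * sizes[l + 1], 0.1)); // fixed init, sketch only
    }

    std::vector<double> Forward(std::vector<double> in) const {
        for (size_t l = 0; l + 1 < sizes.size(); ++l) {
            std::vector<double> out(sizes[l + 1], 0.0);
            for (int j = 0; j < sizes[l + 1]; ++j) {
                double sum = weights[l][j * (sizes[l] + 1) + sizes[l]]; // bias term
                for (int i = 0; i < sizes[l]; ++i)
                    sum += in[i] * weights[l][j * (sizes[l] + 1) + i];
                out[j] = Sigmoid(sum);
            }
            in = out;
        }
        return in;
    }

private:
    std::vector<int> sizes;
    std::vector<std::vector<double>> weights;
};
```

A trainer (backprop, genetic algorithm, etc.) would then take an `MLP` by reference and adjust its weights through an accessor, so networks and learning algorithms can be mixed freely.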
 

-- normalize vectors; visualize input and output data

-- use the external constructor NeuroSolutions; use a DLL with NeuroSolutions; interface ...

-- interfaces for loading/unloading data: vectors, weights

-- EDNA (Evolutionary Design Network Architecture)

-- selectable/adjustable activation functions and learning algorithms
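The "normalize vectors" point, combined with the uniform [-1, 1] range proposed later in the thread, could look like this minimal min-max normalization sketch (the helper name is hypothetical, not from any project code):

```cpp
#include <algorithm>
#include <vector>

// Min-max normalization of an input vector into [-1, 1], matching the
// uniform range proposed for all network inputs and outputs.
// Hypothetical helper for illustration only.
std::vector<double> NormalizeToBipolar(const std::vector<double>& v) {
    double lo = *std::min_element(v.begin(), v.end());
    double hi = *std::max_element(v.begin(), v.end());
    std::vector<double> out(v.size(), 0.0);
    if (hi == lo) return out;  // degenerate case: all values equal -> all zeros
    for (size_t i = 0; i < v.size(); ++i)
        out[i] = 2.0 * (v[i] - lo) / (hi - lo) - 1.0;
    return out;
}
```

For trading data one would normally store `lo`/`hi` from the training sample and reuse them on new data, rather than recomputing per vector.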

 
njel:
Most likely one of the selling points will be that there are no DLLs.
 

...it looks like this:

Type of project: class library,

1) Method of interaction: using an (extensively documented) API in the code of one's own systems.

I understand the practical application of the library will be:

  1. Quick writing of your own utilities (whatever you call them): specify the desired network type, set its configuration, specify what to feed it (the training sample), etc.
    Each such utility can act as a separate application with input parameters. Run this utility in the tester? The output is a trained network saved as a file (FILE_COMMON).

  2. Writing EAs which use the file(s) of trained nets.

2) The second way of interaction: an application with a graphical interface to simplify creating a utility that builds/trains the network. As far as type parameters go, that's simple, but different network types have their own settings. And how do you specify in the UI what to feed the network? It's easier to set in code. Is it possible to embed a network-generation utility template in MetaEditor, as a neural network creation wizard? The output would be finished code; the only thing left is to specify what is fed to the network's input.


P.S. Above, joo suggested using "interfaces of individual modules with scalability in mind"; these could simply be generated in MetaEditor by the wizard, leaving only the learning algorithm to be finished by hand.

 
joo:

Here I propose to outline the goals and objectives of the project. Discuss details and nuances.

If I understood correctly, it is something like FANN, only much broader and more general? I don't even know how to assess the prospects of such a monster; it's an immense undertaking, with so many details and nuances. Even the little piece of code posted by joo did not do without tailoring to a particular implementation (the number 7 in the activation function is probably from that work). Neural networks consist entirely of such particulars. I will follow the project with interest and try not to get in the way, but so far I have little idea whether it is feasible in principle for a few enthusiasts...

 

Modular Neural Networks

You can add the ability to train a large network in parts, in fact it is a training committee, but without the problems of combining networks into one system.
Modular neural networks — Wikipedia
  • uk.wikipedia.org
A modular neural network is a group of neural networks (in this case called modules) controlled by a certain mediator. Each neural network serves as a module and operates on separate inputs to solve particular sub-tasks from the group of tasks that the modular neural network must perform. The mediator takes the output signals of each module...
 
Figar0:

If I understand correctly, it's something like FANN, but much wider and more general?

Much "broader" and more "general". :)

Why else would you bother?

Figar0:

So many details and nuances. Even the little piece of code posted by joo did not do without tailoring to a particular implementation (the number 7 in the activation function is probably from that work). Neural networks consist entirely of such parts.

This is exactly where that's not the case. The number 7 in the activation function is a scale factor, chosen so that the curved part of the sigmoid falls within the [-1.0; 1.0] range.

Further, I propose using this range for the inputs and outputs of all network types, to avoid confusion and provide a uniform interface. That is why I put 7 there, in anticipation of my future developments.

However, this coefficient can be made a variable, giving the ability to regulate the curvature of the sigmoid in all neurons that have an activation function (from logical switching to simple linear scaling, including an intermediate section with a nonlinear S-transformation).
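The adjustable coefficient described above can be sketched as a bipolar sigmoid with a curvature parameter k (the function name is illustrative; the exact formula in F_MLP.mqh may differ):

```cpp
#include <cmath>

// Bipolar sigmoid with adjustable curvature coefficient k, output in (-1, 1).
// With k = 7 the curved region falls roughly within [-1, 1] on the input axis.
// Large k approaches a logical switch; small k approaches linear scaling,
// with a nonlinear S-shaped transition in between.
double SigmoidK(double x, double k = 7.0) {
    return 2.0 / (1.0 + std::exp(-k * x)) - 1.0;
}
```

Making k a per-layer or per-network variable would let one function cover all three regimes mentioned above without touching the network code.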
