Basic types of neural layers

In the previous sections, we got acquainted with the architecture of a fully connected perceptron and constructed our first neural network model. We tested it in various modes, obtained our first results, and gained initial hands-on experience. However, the fully connected neural layers used in the perceptron, despite their merits, also have certain drawbacks. For instance, a fully connected layer analyzes only the current data, with no connection to previously processed data: each portion of information is analyzed in an informational vacuum. To expand the volume of analyzed data, the size of the model must grow continually, and with it the costs of training and operation grow rapidly. Furthermore, a fully connected layer treats its input as an undivided whole and fails to reveal dependencies between its individual elements.
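To make the growth argument concrete, here is a tiny MQL5 sketch (the function name and the numbers in the comments are illustrative, not taken from our model) that counts the parameters of a single fully connected layer. Every additional input element adds a full set of weights, so widening the analysis window inflates the model almost proportionally:

```
//--- Hypothetical helper: parameters of one fully connected layer
long FullyConnectedParams(const long inputs, const long neurons)
  {
   return(inputs * neurons + neurons); // one weight per connection plus one bias per neuron
  }
//--- FullyConnectedParams(100, 500)  returns  50500
//--- FullyConnectedParams(1000, 500) returns 500500 (10x the inputs, ~10x the model)
```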

In this chapter, we will explore various architectural solutions for constructing neural layers that aim to overcome these drawbacks of the fully connected layers studied earlier. Because fully connected networks analyze data without considering its context and interconnections, they can suffer from reduced efficiency and inflated model size. We will consider the following architectural approaches:

  • Convolutional Neural Networks (CNN): we will delve into their architecture and implementation principles, and examine ways to build them using MQL5 and OpenCL. We will then test convolutional models in practice to evaluate their performance and efficiency. A minimal sketch of the convolution operation follows this list.
  • Recurrent Neural Networks (RNN): their architecture and implementation principles, including ways to build LSTM blocks using MQL5 and organize parallel computations using OpenCL. In this section, you will also learn how to implement RNNs in Python and test them.
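As a first taste of the convolutional approach, below is a minimal sketch of the operation at the heart of a convolutional layer: a small kernel of shared weights slides along the input series. This is a simplified illustration (a single filter, a step of 1, no padding, and a hypothetical function name), not the class implementation we will develop later in the chapter.

```
//--- Minimal sketch of a 1D convolution: one filter, step 1, no padding
bool Convolve1D(const double &inputs[], const double &kernel[],
                const double bias, double &outputs[])
  {
   int window = ArraySize(kernel);
   int total  = ArraySize(inputs) - window + 1;
   if(window <= 0 || total <= 0)
      return(false);
   ArrayResize(outputs, total);
   for(int i = 0; i < total; i++)
     {
      double sum = bias;
      for(int k = 0; k < window; k++)
         sum += inputs[i + k] * kernel[k];  // the same weights are reused
      outputs[i] = MathTanh(sum);           // at every window position
     }
   return(true);
  }
```

Because the same kernel is applied at every position, the number of trainable parameters depends on the window size rather than on the length of the input, which directly addresses the model-growth problem described above.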

Thus, in this chapter, we will study convolutional and recurrent neural networks, how they operate, and how they are applied to practical problems. We will also examine various methods of their construction and optimization.
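To close this overview, here is the recurrent counterpart of the convolution sketch above: a hypothetical single-neuron illustration (not the LSTM block we will build later) showing the defining feature of a recurrent layer, namely that its previous output re-enters the current calculation.

```
//--- Minimal sketch: one step of a plain recurrent neuron
//--- `state` carries information between calls, giving the layer memory
double RecurrentStep(const double input, double &state,
                     const double w_in, const double w_rec, const double bias)
  {
   state = MathTanh(w_in * input + w_rec * state + bias);
   return(state);
  }
```

Calling RecurrentStep over a series, with state initialized to 0 before the first call, lets each new value be interpreted in the context of everything seen before, which is exactly what a fully connected layer cannot do.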