Taking Neural Networks to the next level - page 38

 

Forum on trading, automated trading systems and testing trading strategies

Better NN EA

Sergey Golubev, 2024.09.22 07:41

Neural Networks Made Easy (Part 88): Time-Series Dense Encoder (TiDE)

Virtually every known neural network architecture has been studied for its ability to solve time series forecasting problems, including recurrent, convolutional, and graph models. The most notable results come from models based on the Transformer architecture, several of which have been presented in this series of articles. However, recent research has shown that Transformer-based architectures might be less powerful than expected: on some time series forecasting benchmarks, simple linear models show comparable or even better performance. Such linear models have a shortcoming of their own, though: they cannot model nonlinear relationships between a time series and time-independent covariates.
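
TiDE's answer is to replace attention with stacked dense residual blocks that can mix past values and covariates nonlinearly. Below is a minimal numpy sketch of one such block; the layer sizes, initialization, and single-block usage are illustrative, not the authors' exact configuration.

import numpy as np

def relu(x):
    return np.maximum(0.0, x)

class ResidualBlock:
    """Dense residual block: Linear -> ReLU -> Linear, plus a linear skip path.
    TiDE stacks blocks like this in place of attention layers."""
    def __init__(self, d_in, d_hidden, d_out, rng):
        self.W1 = rng.normal(0, 0.02, (d_in, d_hidden))
        self.W2 = rng.normal(0, 0.02, (d_hidden, d_out))
        self.Ws = rng.normal(0, 0.02, (d_in, d_out))   # skip projection

    def __call__(self, x):
        return relu(x @ self.W1) @ self.W2 + x @ self.Ws

rng = np.random.default_rng(0)
history = rng.normal(size=96)                  # look-back window
covariates = rng.normal(size=8)                # time-independent covariates
x = np.concatenate([history, covariates])      # past values + covariates
block = ResidualBlock(x.size, 64, 24, rng)
forecast = block(x)                            # 24-step forecast from one block
print(forecast.shape)

The skip path keeps a linear route through the block, so the model degrades gracefully to the strong linear baseline when the nonlinearity is not needed.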

 

Forum on trading, automated trading systems and testing trading strategies

Better NN EA

Sergey Golubev, 2024.10.17 16:39

Neural Network in Practice: Least Squares 

In the previous article, Neural Network in Practice: Secant Line, we began discussing applied mathematics in practice, though only as a short introduction to the topic. We saw that the basic mathematical tool involved is a trigonometric construction and that, contrary to what many think, it is the secant rather than the tangent. This may all seem confusing at first, but you will soon find that everything is much simpler than it appears: rather than adding to the confusion that often surrounds the mathematics, here everything develops quite naturally.
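
The point of the secant is that the slope of a line through two points on a curve approximates the rate of change, and tends to the derivative as the points approach each other. A minimal numerical illustration (not code from the article):

import numpy as np

def secant_slope(f, a, b):
    """Slope of the secant line through (a, f(a)) and (b, f(b))."""
    return (f(b) - f(a)) / (b - a)

f = lambda x: x ** 2
# As b approaches a = 2, the secant slope approaches the derivative f'(2) = 4.
for b in (3.0, 2.1, 2.001):
    print(b, secant_slope(f, 2.0, b))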


 

Forum on trading, automated trading systems and testing trading strategies

Better NN EA

Sergey Golubev, 2024.10.25 12:59

Neural networks made easy (Part 89): Frequency Enhanced Decomposition Transformer (FEDformer)

Long-term time series forecasting is a long-standing challenge across many applied domains. Transformer-based models show promising results here, but their high computational complexity and memory requirements make it difficult to apply the Transformer to long sequences. This has given rise to numerous studies devoted to reducing the computational cost of the Transformer algorithm.

Despite the progress made by Transformer-based time series forecasting methods, in some cases they fail to capture the overall distribution of the time series. The authors of the paper "FEDformer: Frequency Enhanced Decomposed Transformer for Long-term Series Forecasting" attempt to solve this problem. To illustrate it, they compare the actual values of a time series with the forecasts produced by the vanilla Transformer.
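
FEDformer's central trick for cutting the Transformer's quadratic cost is to operate on a small, fixed random subset of Fourier modes instead of all time steps. A toy numpy sketch of that idea follows; the mode count M and the reconstruction check are illustrative, and the real model applies learned attention to the retained modes rather than simply zeroing the rest.

import numpy as np

rng = np.random.default_rng(0)
L, M = 512, 16                       # sequence length, retained modes
x = np.sin(np.linspace(0, 20, L)) + 0.1 * rng.normal(size=L)

spec = np.fft.rfft(x)                # frequency representation, L//2 + 1 modes
keep = rng.choice(spec.size, M, replace=False)   # fixed random subset
mask = np.zeros(spec.size, dtype=bool)
mask[keep] = True
spec[~mask] = 0                      # mixing is applied only to the M modes
x_approx = np.fft.irfft(spec, n=L)   # back to the time domain
print(np.mean((x - x_approx) ** 2))  # error from M of L//2 + 1 modes

Because M is fixed, the cost of the frequency-domain attention grows linearly with sequence length instead of quadratically.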


 

Forum on trading, automated trading systems and testing trading strategies

Better NN EA

Sergey Golubev, 2024.10.25 14:13

Neural Network in Practice: Straight Line Function

In the previous article, "Neural Network in Practice: Least Squares", we looked at how, in very simple cases, we can find an equation that best describes the data set we are using. The equation formed in that system was very simple and used only one variable. Since we have already shown how to do the calculation, we will get straight to the point here: deriving an equation from the values available in the database requires significant knowledge of analytical mathematics and algebraic computation. In addition, of course, it is necessary to know what type of data the database contains.
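
For the single-variable case the excerpt refers to, the least-squares line y = a*x + b has a closed-form solution. A small sketch in Python (the sample data are made up for illustration):

import numpy as np

# Fit y = a*x + b by least squares: minimize sum((a*x_i + b - y_i)^2).
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([2.1, 3.9, 6.2, 8.1, 9.8])

n = len(x)
a = (n * np.sum(x * y) - np.sum(x) * np.sum(y)) / (n * np.sum(x ** 2) - np.sum(x) ** 2)
b = (np.sum(y) - a * np.sum(x)) / n
print(a, b)   # close to the generating line y = 2x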

 

Forum on trading, automated trading systems and testing trading strategies

Better NN EA

Sergey Golubev, 2024.10.31 09:43

Neural Networks Made Easy (Part 90): Frequency Interpolation of Time Series (FITS)

In the previous articles, we discussed the FEDformer method, which uses the frequency domain to find patterns in a time series. However, the Transformer it is built on can hardly be called a lightweight model. As an alternative to complex models with large computational costs, the paper "FITS: Modeling Time Series with 10k Parameters" proposes Frequency Interpolation of Time Series (FITS), a compact and efficient solution for time series analysis and forecasting. FITS uses frequency-domain interpolation to expand the window of the analyzed time segment, enabling the efficient extraction of temporal features without significant computational overhead.
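
The core FITS idea can be sketched in a few lines: transform the look-back window to the frequency domain, keep only the low-frequency modes, and up-sample the spectrum so that the inverse transform spans the window plus the forecast horizon. In the real method the up-sampling is a learned complex-valued linear layer; in this toy numpy version, plain zero-padding stands in for it.

import numpy as np

def fits_interpolate(x, horizon, cutoff):
    """Toy FITS: low-pass filter in frequency, then zero-pad the spectrum
    so the inverse FFT covers the look-back window plus the horizon."""
    L = len(x)
    spec = np.fft.rfft(x)[:cutoff]              # keep the first modes only
    L_out = L + horizon
    spec_out = np.zeros(L_out // 2 + 1, dtype=complex)
    spec_out[:cutoff] = spec * (L_out / L)      # rescale for the longer window
    return np.fft.irfft(spec_out, n=L_out)      # series of length L + horizon

t = np.arange(96)
x = np.sin(2 * np.pi * t / 24)                  # daily cycle, 96 observations
y = fits_interpolate(x, horizon=24, cutoff=8)
print(y.shape)                                  # (120,) = window + forecast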

 

Forum on trading, automated trading systems and testing trading strategies

Better NN EA

Sergey Golubev, 2024.11.03 11:19

Neural Networks Made Easy (Part 91): Frequency Domain Forecasting (FreDF)

Forecasting future price series is critical in various financial market scenarios. Most existing methods rely on autocorrelation in the data; in other words, they exploit the correlation between time steps that is present both in the input data and in the predicted values.

Among the models gaining popularity are those based on the Transformer architecture, which use Self-Attention mechanisms for dynamic autocorrelation estimation. There is also growing interest in the use of frequency analysis in forecasting models: representing the input sequence in the frequency domain sidesteps the complexity of describing autocorrelation and improves the efficiency of various models.
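
The FreDF idea of training against frequency-domain targets can be sketched as a loss that blends a time-domain MSE with an error on the spectra. The blending weight alpha and the choice of norms here are illustrative, not the paper's exact formulation:

import numpy as np

def fredf_loss(pred, target, alpha=0.5):
    """Blend of a time-domain MSE and a frequency-domain error: penalizing
    spectra sidesteps the problem of autocorrelated residuals."""
    time_term = np.mean((pred - target) ** 2)
    freq_term = np.mean(np.abs(np.fft.rfft(pred) - np.fft.rfft(target)))
    return (1 - alpha) * time_term + alpha * freq_term

t = np.arange(64)
target = np.sin(2 * np.pi * t / 16)
pred = np.sin(2 * np.pi * t / 16 + 0.2)        # phase-shifted forecast
print(fredf_loss(pred, target))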


 

Forum on trading, automated trading systems and testing trading strategies

Better NN EA

Sergey Golubev, 2024.11.10 08:19

Neural Networks Made Easy (Part 92): Adaptive Forecasting in Frequency and Time Domains

The article presents experiments on eight real data sets, in which ATFNet shows promising performance and outperforms other state-of-the-art time series forecasting methods on many of them.

 

Forum on trading, automated trading systems and testing trading strategies

Better NN EA

Sergey Golubev, 2024.11.22 09:38

Neural Networks Made Easy (Part 93): Adaptive Forecasting in Frequency and Time Domains (Final Part)

In the previous article, we got acquainted with the ATFNet algorithm, an ensemble of two time series forecasting models. One works in the time domain and constructs predicted values of the studied time series from an analysis of signal amplitudes. The other works with the frequency characteristics of the series, capturing its global dependencies, periodicity, and spectrum. According to the author of the method, adaptively merging the two independent forecasts yields impressive results.
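
The adaptive merging step can be illustrated with a toy weighting rule: the more of the input's spectral energy sits in a single dominant harmonic, the more weight the frequency-domain forecast receives. This proxy is a deliberate simplification for illustration, not ATFNet's actual weighting scheme:

import numpy as np

def adaptive_merge(f_time, f_freq, history):
    """Weight the two forecasts by how periodic the input looks: the share
    of spectral energy in the dominant harmonic (illustrative proxy)."""
    spec = np.abs(np.fft.rfft(history - history.mean())) ** 2
    w_freq = spec.max() / spec.sum()            # in [0, 1]: 1 = pure cycle
    return w_freq * f_freq + (1 - w_freq) * f_time

t = np.arange(128)
history = np.sin(2 * np.pi * t / 32) + 0.1 * np.random.default_rng(0).normal(size=128)
f_time = np.full(16, 0.1)                       # dummy time-domain forecast
f_freq = np.full(16, 0.3)                       # dummy frequency-domain forecast
print(adaptive_merge(f_time, f_freq, history)[:3])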

 

Forum on trading, automated trading systems and testing trading strategies

Better NN EA

Sergey Golubev, 2024.11.28 09:10

Neural Networks Made Easy (Part 94): Optimizing the Input Sequence

A common approach when processing time series is to keep the original arrangement of the time steps intact, on the assumption that the historical order is optimal. However, most existing models lack explicit mechanisms for exploring the relationships between distant segments within each time series, which may in fact have strong dependencies. For example, models based on convolutional networks (CNNs) used for time series learning can only capture patterns within a limited time window. As a result, when important patterns span longer windows, such models struggle to capture this information effectively. Using deeper networks increases the size of the receptive field and partially solves the problem, but the number of convolutional layers required to cover the entire sequence may be too large, and oversizing the model leads to the vanishing gradient problem.
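
The receptive-field arithmetic behind that limitation is easy to make concrete: each convolutional layer with kernel size k and dilation d adds (k - 1) * d steps to the field, so plain stacking grows it only linearly with depth. A short sketch (not from the article):

# Receptive field of stacked 1D convolutions.
def receptive_field(layers):
    r = 1
    for k, d in layers:                # (kernel_size, dilation) per layer
        r += (k - 1) * d
    return r

# Plain stacking grows linearly: 8 layers of k=3 see only 17 steps...
print(receptive_field([(3, 1)] * 8))                      # 17
# ...while doubling dilations grow the field exponentially with depth.
print(receptive_field([(3, 2 ** i) for i in range(8)]))   # 511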

 
When training a network or model, what can be used as a target label other than the next closing price?