Better NN EA - page 12

 
biddick:
What are the best neural network algorithms for signal filtering? Recurrent networks? PNN, or something else?


 

Neural Network

Neural Network: discussion/development threads

  1. Better NN EA development thread with indicators, PDF files and so on.
  2. Better NN EA final thread 
  3. taking NEURAL NETWORKS to the NEXT LEVEL - very interesting thread
  4. Neural Networks thread (good public discussion)
  5. How to build a NN-EA in MT4: useful thread for developers.
  6. Radial Basis Network (RBN) - As Fit Filter For Price: the thread 

Neural Network: Indicators and systems development

  1. Self-trained MA cross!: development thread for new generation of the indicators
  2. Levenberg-Marquardt algorithm: development thread

Neural Network: EAs

  1. CyberiaTrader EA: discussion thread and EAs' thread.
  2. Self learning expert thread with EAs' files here.
  3. Artificial Intelligence EAs threads: How to "teach" and to use the AI ("neuron") EA thread and Artificial Intelligence thread
  4. Forex_NN_Expert EA and indicator thread.
  5. SpiNNaker - A Neural Network EA thread

Neural Network: The Books

  1. What to read and where to learn about Machine Learning (10 free books) - the post.

The article

CodeBase

Neural networks made easy (Part 14): Data clustering
  • www.mql5.com
It has been more than a year since I published my last article. This is quite a lot of time to revise ideas and develop new approaches. In the new article, I would like to move away from the previously used supervised learning methods. This time we will dive into unsupervised learning algorithms. In particular, we will consider one of the clustering algorithms: k-means.
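As a refresher before the article itself, here is a minimal, self-contained k-means sketch in Python/NumPy. The feature matrix and all names are illustrative placeholders, not code from the article:

```python
import numpy as np

def kmeans(X, k, iters=100, seed=0):
    """Plain k-means: X is (n_samples, n_features)."""
    rng = np.random.default_rng(seed)
    # Initialize centroids from k distinct random samples.
    centroids = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(iters):
        # Assignment step: each sample joins its nearest centroid.
        d = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        # Update step: each centroid moves to the mean of its cluster.
        new = np.array([X[labels == j].mean(axis=0) if np.any(labels == j)
                        else centroids[j] for j in range(k)])
        if np.allclose(new, centroids):
            break  # converged
        centroids = new
    return centroids, labels

# Toy usage: cluster candle feature vectors (random placeholders here).
X = np.random.default_rng(1).normal(size=(500, 4))
centroids, labels = kmeans(X, k=5)
```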
 

The article

Neural Networks Made Easy
  • www.mql5.com
Artificial intelligence is often associated with something fantastically complex and incomprehensible. At the same time, it is mentioned more and more often in everyday life, and news about achievements involving neural networks regularly appears in the media. The purpose of this article is to show that anyone can easily create a neural network and use AI achievements in trading.
 

Neural networks made easy (Part 61): Optimism issue in offline reinforcement learning

Recently, offline reinforcement learning methods have become widespread, promising good prospects for solving problems of varying complexity. However, one of the main problems researchers face is the optimism that can arise during learning. The agent optimizes its strategy based on the data from the training set and gains confidence in its actions. But the training set is quite often unable to cover the entire variety of possible states and transitions of the environment. In a stochastic environment, such confidence turns out not to be entirely justified. In such cases, the agent's optimistic strategy may lead to increased risks and undesirable consequences.
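For illustration, here is a tiny NumPy sketch of one common way to temper such optimism: taking the minimum over an ensemble of value estimates rather than trusting a single (or the most favorable) critic. This is a generic device, not necessarily the specific method covered in the article:

```python
import numpy as np

# Hypothetical value estimates for a batch of 64 state-action pairs from
# an ensemble of 5 independently trained critics (random placeholders).
rng = np.random.default_rng(0)
q_ensemble = rng.normal(loc=1.0, scale=0.5, size=(64, 5))

# Optimistic estimate: trusting the most favorable critic inflates values
# for actions that are poorly covered by the training set.
q_optimistic = q_ensemble.max(axis=1)

# Pessimistic (conservative) estimate: the ensemble minimum penalizes
# disagreement (uncertainty), discouraging overconfident actions.
q_conservative = q_ensemble.min(axis=1)

print(q_optimistic.mean() - q_conservative.mean())  # the "optimism gap"
```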
Neural networks made easy (Part 61): Optimism issue in offline reinforcement learning
  • www.mql5.com
During offline learning, we optimize the Agent's policy based on the training sample data. The resulting strategy gives the Agent confidence in its actions. However, such optimism is not always justified and can increase risks during model operation. Today we will look at one of the methods for reducing these risks.
 

Neural networks made easy (Part 62): Using Decision Transformer in hierarchical models

Previously, we considered hierarchical models for solving problems within the classical Markov process framework. However, the advantages of hierarchical approaches also apply to sequence analysis problems. One such algorithm is the Control Transformer, presented in the article "Control Transformer: Robot Navigation in Unknown Environments through PRM-Guided Return-Conditioned Sequence Modeling". Its authors position it as a new architecture designed to solve complex control and navigation problems based on reinforcement learning. The method combines modern techniques of reinforcement learning, planning and machine learning, which allows us to create adaptive control strategies in a variety of environments.

Control Transformer opens new perspectives for solving complex control problems in robotics, autonomous driving and other fields. Let us look at the prospects of using this method to solve our trading problems.
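As a rough illustration of the return-conditioned sequence modeling that Decision/Control Transformer methods build on, here is a minimal Python sketch of the return-to-go quantity such models are conditioned on; the function is illustrative, not taken from the article:

```python
import numpy as np

def returns_to_go(rewards, gamma=1.0):
    """Suffix sums of rewards: the return-to-go token at each time step."""
    rtg = np.zeros(len(rewards))
    running = 0.0
    for t in reversed(range(len(rewards))):
        running = rewards[t] + gamma * running
        rtg[t] = running
    return rtg

# A return-conditioned sequence model interleaves (return-to-go, state,
# action) tokens and is trained to predict each action autoregressively,
# so at inference time a desired return can be requested up front.
print(returns_to_go(np.array([0.0, 1.0, 0.0, 2.0])))  # [3. 3. 2. 2.]
```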

Control Transformer: Robot Navigation in Unknown Environments through PRM-Guided Return-Conditioned Sequence Modeling
  • arxiv.org
Learning long-horizon tasks such as navigation has presented difficult challenges for successfully applying reinforcement learning to robotics. From another perspective, under known environments, sampling-based planning can robustly find collision-free paths in environments without learning. In this work, we propose Control Transformer that models return-conditioned sequences from low-level policies guided by a sampling-based Probabilistic Roadmap (PRM) planner. We demonstrate that our framework can solve long-horizon navigation tasks using only local information. We evaluate our approach on partially-observed maze navigation with MuJoCo robots, including Ant, Point, and Humanoid. We show that Control Transformer can successfully navigate through mazes and transfer to unknown environments. Additionally, we apply our method to a differential drive robot (Turtlebot3) and show zero-shot sim2real transfer under noisy observations.
 

Neural networks made easy (Part 63): Unsupervised Pretraining for Decision Transformer (PDT)

PDT jointly learns an embedding space of future trajectories as well as a prior over futures conditioned only on past information. By conditioning action prediction on the target future embedding, PDT is endowed with the ability to "reason over the future". This ability is naturally task-independent and generalizes to different task specifications.

To achieve efficient online fine-tuning on downstream tasks, the framework can easily be adapted to new conditions by associating each future embedding with its return, which is realized by training a reward prediction network for each future embedding.
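Below is a hypothetical PyTorch sketch of these two pieces: an encoder that embeds a window of future states, and a reward prediction head attached to the embedding. The module names, layer sizes and mean-pooling are assumptions for illustration; the actual PDT architecture differs in detail:

```python
import torch
import torch.nn as nn

class FutureEncoder(nn.Module):
    """Embeds a window of future states into a compact latent vector."""
    def __init__(self, state_dim, emb_dim):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(state_dim, 64), nn.ReLU(),
                                 nn.Linear(64, emb_dim))
    def forward(self, future_states):               # (batch, horizon, state_dim)
        return self.net(future_states).mean(dim=1)  # pooled future embedding

class RewardHead(nn.Module):
    """Predicts the return associated with a future embedding, enabling
    return-guided selection of futures during online fine-tuning."""
    def __init__(self, emb_dim):
        super().__init__()
        self.net = nn.Linear(emb_dim, 1)
    def forward(self, z):
        return self.net(z).squeeze(-1)

enc, head = FutureEncoder(state_dim=8, emb_dim=16), RewardHead(16)
z = enc(torch.randn(4, 10, 8))  # embed 10-step futures for a batch of 4
print(head(z).shape)            # torch.Size([4])
```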

Neural networks made easy (Part 63): Unsupervised Pretraining for Decision Transformer (PDT)
  • www.mql5.com
We continue to discuss the family of Decision Transformer methods. From the previous article, we have already seen that training the transformer underlying the architecture of these methods is a rather complex task requiring a large labeled training dataset. In this article, we will look at an algorithm that uses unlabeled trajectories for preliminary model training.
 

Neural networks made easy (Part 64): ConserWeightive Behavioral Cloning (CWBC) method

The Decision Transformer and all its modifications, which we discussed in recent articles, belong to the Behavior Cloning (BC) family of methods. We train models to repeat actions from "expert" trajectories, conditioned on the environment state and the target outcome. Thus, we teach the model to imitate the expert's behavior in the current state of the environment in order to achieve the target.
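For readers new to the term, a minimal behavior cloning loop is plain supervised regression onto the expert's actions. The toy data and network in this PyTorch sketch are illustrative only and omit the CWBC trajectory weighting the article is about:

```python
import torch
import torch.nn as nn

# Toy expert dataset: states and the actions the "expert" took in them.
states = torch.randn(256, 8)
expert_actions = torch.randn(256, 2)

policy = nn.Sequential(nn.Linear(8, 64), nn.ReLU(), nn.Linear(64, 2))
opt = torch.optim.Adam(policy.parameters(), lr=1e-3)

# Behavior cloning: minimize the gap between predicted and expert actions.
for epoch in range(200):
    loss = nn.functional.mse_loss(policy(states), expert_actions)
    opt.zero_grad()
    loss.backward()
    opt.step()
```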

Neural networks made easy (Part 64): ConserWeightive Behavioral Cloning (CWBC) method
  • www.mql5.com
As a result of tests performed in previous articles, we came to the conclusion that the optimality of the trained strategy largely depends on the training set used. In this article, we will get acquainted with a fairly simple yet effective method for selecting trajectories to train models.
 

Neural networks made easy (Part 65): Distance Weighted Supervised Learning (DWSL)

Behavior cloning methods, largely based on the principles of supervised learning, show fairly good results. But their main problem remains the search for ideal role models, which are sometimes very difficult to collect. In turn, reinforcement learning methods are able to work with non-optimal raw data and can still find suboptimal policies that achieve the goal. However, the search for an optimal policy often runs into a hard optimization problem, especially in high-dimensional and stochastic environments.

To bridge the gap between these two approaches, a group of scientists proposed the Distance Weighted Supervised Learning (DWSL) method, presented in the article "Distance Weighted Supervised Learning for Offline Interaction Data". It is an offline supervised learning algorithm for goal-conditioned policies. Theoretically, DWSL converges to an optimal policy whose return is bounded from below by the level of the trajectories in the training set. The practical examples in the article demonstrate the superiority of the proposed method over imitation learning and reinforcement learning algorithms. I suggest taking a closer look at the DWSL algorithm and evaluating its strengths and weaknesses in solving our practical problems.
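A simplified NumPy sketch of the core weighting idea: transitions whose actions bring the agent closer to the goal receive exponentially larger weights in a supervised (behavior cloning) loss. The distance estimates below are assumed given for illustration; the full DWSL method learns a distribution over distances from the offline data:

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed given: for each logged transition, the estimated number of steps
# to the goal under the logged action (dist), and the best achievable
# distance from the same state (dist_best).
dist = rng.uniform(1.0, 20.0, size=1000)
dist_best = dist - rng.uniform(0.0, 5.0, size=1000)

# "Advantage" of the logged action in distance terms: zero for the best
# possible action, increasingly negative for detours.
advantage = dist_best - dist

# Exponential weights (temperature beta) concentrate the behavior cloning
# loss on transitions whose actions shorten the path to the goal.
beta = 1.0
weights = np.exp(advantage / beta)
weights /= weights.mean()  # normalize so the loss scale stays comparable

# A weighted BC loss would then be: mean(weights * per_sample_bc_loss).
```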

Neural networks made easy (Part 65): Distance Weighted Supervised Learning (DWSL)
  • www.mql5.com
In this article, we will get acquainted with an interesting algorithm that is built at the intersection of supervised and reinforcement learning methods.
 

Neural networks made easy (Part 66): Exploration problems in offline learning

As we move through this series of articles on reinforcement learning methods, we keep facing the question of balance between exploring the environment and exploiting the learned policy. We have previously considered various methods of stimulating the Agent to explore. But quite often, algorithms that demonstrate excellent results in online learning are not as effective offline. The problem is that in offline mode, information about the environment is limited by the size of the training dataset. Most often, the data selected for model training is narrowly targeted, as it is collected within a small subspace of the task, which gives an even more limited idea of the environment. However, in order to find the optimal solution, the Agent needs the most complete understanding of the environment and its patterns. We noted earlier that learning results often depend on the training dataset.

In this article, we will get acquainted with the Exploratory Data for Offline RL (ExORL) framework, presented in the paper "Don't Change the Algorithm, Change the Data: Exploratory Data for Offline Reinforcement Learning". The results presented there demonstrate that the correct approach to data collection has a significant impact on final learning outcomes, comparable to the impact of the choice of learning algorithm and model architecture.
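A schematic Python sketch of the three-stage ExORL pipeline as described in the paper: reward-free exploratory collection, reward relabeling for the downstream task, then standard offline training. The reward function and data here are illustrative placeholders:

```python
import numpy as np

rng = np.random.default_rng(0)

# Stage 1 - exploratory collection: a reward-free policy (random here)
# gathers diverse (state, action, next_state) transitions.
dataset = [
    {"s": rng.normal(size=4), "a": rng.normal(size=2), "s2": rng.normal(size=4)}
    for _ in range(1000)
]

# Stage 2 - relabeling: once the downstream task is fixed, rewards are
# computed after the fact from the stored states (illustrative reward).
def task_reward(next_state):
    return -float(np.linalg.norm(next_state))  # e.g. distance-to-target style

for tr in dataset:
    tr["r"] = task_reward(tr["s2"])

# Stage 3 - any off-the-shelf offline RL algorithm trains on the relabeled
# dataset; ExORL's point is that the diversity achieved in stage 1 matters
# about as much as the choice of algorithm in stage 3.
```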
Neural networks made easy (Part 66): Exploration problems in offline learning
  • www.mql5.com
Models are trained offline using data from a prepared training dataset. While this provides certain advantages, its downside is that information about the environment is heavily compressed to the size of the training dataset, which, in turn, limits the possibilities for exploration. In this article, we will consider a method that enables filling a training dataset with the most diverse data possible.
 

Neural networks made easy (Part 67): Using past experience to solve new tasks

Reinforcement learning is built on maximizing the reward received from the environment during interaction with it. Obviously, the learning process requires constant interaction with the environment. However, situations differ: when solving some tasks, we can encounter various restrictions on such interaction. A possible solution for such situations is to use offline reinforcement learning algorithms, which allow you to train models on a limited archive of trajectories collected during preliminary interaction with the environment, while it was still available.
Neural networks made easy (Part 67): Using past experience to solve new tasks
  • www.mql5.com
In this article, we continue discussing methods for collecting data into a training set. Obviously, the learning process requires constant interaction with the environment. However, situations can be different.