
Build your own algos with ADL® by Trading Technologies

Andrew Reynolds, the product manager for automated trading tools at Trading Technologies, introduces ADL (Algo Design Lab) as a groundbreaking solution for simplifying the development of trading algorithms. Prior to ADL, traders interested in creating their own algorithms had to learn coding, which was time-consuming and entailed a lengthy development cycle. ADL revolutionizes the process by providing an intuitive graphical tool that allows traders to design and deploy algorithms without writing a single line of code. This significantly lowers the barrier to entry in terms of technical ability and enables traders to swiftly capitalize on market opportunities. ADL ensures optimal performance by converting the designed algorithms into well-tested code that runs on co-located high-performance servers.

Reynolds proceeds to explain the key features and functionalities of ADL. The ADL canvas serves as the workspace, consisting of a wide array of blocks representing different trading concepts and operations. Traders can easily drag and drop these blocks to create algorithms, and each block has specific properties and can be connected to other blocks to define the desired logic. Group blocks allow encapsulating specific logic and saving them as library blocks for future reuse. To enhance organization, bookmarks can be added, and a search mechanism is available for quick navigation through blocks and sections. ADL incorporates predictive techniques to detect potential block connections, further expediting the development process.

As the presentation continues, the instructor demonstrates the step-by-step creation of algorithms using ADL. The platform offers real-time feedback and user-friendly features to aid in efficient development. The instructor showcases the addition of entry side logic to an algorithm, followed by the incorporation of exit side logic, and finally the creation of an algorithm with both entry and exit side logic. Various blocks such as order blocks, message info extractors, field blocks, and alert blocks are utilized to define the desired functionality of the algorithms. Throughout the demonstration, the instructor highlights the readability and customization options provided by jump blocks, allowing traders to tailor their algorithms according to their preferences.
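ADL is graphical, so none of this is written as code, but the logic the entry and exit blocks express can be sketched in ordinary Python-style pseudocode. The sketch below is purely illustrative: the market object, event handlers, and tick size are hypothetical stand-ins, not Trading Technologies' API.

```python
# Hypothetical sketch of the entry/exit logic described above.
# The market object and handlers are illustrative stand-ins, NOT TT's API.

TICK_SIZE = 0.25  # assumed minimum tick (a field block would supply this)

def on_start(market):
    # Entry side: an order block plus a field block extracting the bid price.
    market.submit_order(side="BUY", type="LIMIT", price=market.bid, qty=1)

def on_fill(market, fill):
    # Exit side: a message info extractor pulls the fill price and quantity;
    # the hedge sell is priced one tick above the fill.
    hedge_price = fill.price + TICK_SIZE
    market.submit_order(side="SELL", type="LIMIT", price=hedge_price, qty=fill.qty)
    market.alert(f"Hedge working at {hedge_price} for {fill.qty} lots")  # alert block
```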

The instructor then introduces the Order Management Algo (OMA), which enables applying algorithmic logic to existing orders, providing flexibility to manipulate price, quantity, stop price, and disclosed quantity as needed. They explain how the bid drifter strategy can be implemented, incrementally increasing the price in intervals until the order is filled. The instructor emphasizes that ADL is designed to prevent unintended actions and infinite loops, ensuring user safety and expected behavior. Furthermore, ADL incorporates a P&L risk block feature that allows traders to set predefined loss thresholds, automatically stopping the algorithm if losses exceed the specified amount.
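As a rough illustration of the bid drifter idea, the sketch below lifts a working order's price one tick at fixed intervals until it fills. The order object, tick size, and interval are hypothetical; in ADL this is wired up with blocks, and the platform's own safeguards, rather than a manual cap, prevent runaway loops.

```python
import time

TICK_SIZE = 0.25       # assumed minimum tick size
INTERVAL_SECONDS = 5   # assumed drift interval

def bid_drifter(order, max_ticks=10):
    # Drift the order price upward until filled; the hard cap mirrors
    # ADL's logic checking against infinite loops.
    for _ in range(max_ticks):
        if order.is_filled():
            return
        order.change_price(order.price + TICK_SIZE)
        time.sleep(INTERVAL_SECONDS)
```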

The presenters discuss the launch and monitoring of algorithms using ADL. Algo launching can be initiated from various widgets within the front end: the Auto Trader algo dashboard, order book, or MD Trader. The one-click launching capability directly from the MD Trader ladder is highlighted, enabling traders to choose instruments and modify algo parameters effortlessly. ADL also provides the ability to select colocation facilities based on the instrument, and traders can monitor the progress of their algorithms directly from the front end. Additionally, the platform supports specifying different accounts for each instrument when launching algorithms, enhancing flexibility and account management options.

The presenters emphasize the availability of resources for learning more about ADL on the Trading Technologies website, including a support forum for discussing ADL-related topics. They inform the audience about the upcoming addition of an analytics block, allowing extraction of historical data and performing built-in studies within ADL. Users will have the capability to build custom studies using historical data directly within the algorithm. The presenters highlight that Trading Technologies is broker-neutral, enabling connection to any broker that supports the platform. Pricing details are also mentioned, and the "stacker" is identified as a common algorithm type.

The speakers delve into the versatility of writing algorithms using ADL, emphasizing that each trader can bring their unique "secret sauce" to algorithmic trading. They recommend the Trading Technologies community forum as an excellent resource for obtaining additional information and insights on popular algorithmic strategies. The advantages of single-click launch with autotraders are explained, allowing traders to model multiple trades simultaneously. They also mention the availability of the ADL dashboard on mobile apps, enabling traders to pause and restart algorithms remotely.

The presentation proceeds with a discussion on accessing the ADL platform through a free demo account on the TradeTT site, providing immediate access and an opportunity to explore the platform's capabilities. It is highlighted that ADL is co-located with major exchanges, offering a pool of servers located in facilities across various locations, including a gen-pop server for users to experiment with different trades. The speakers also touch upon web services and APIs, mentioning the release of the TT REST API and the utility of the ADL platform for forex trading.

Regarding foreign exchange trading options, the speakers clarify that while there are no immediate plans to connect directly with forex exchanges, forex features are available on the CME, and NYSE offers a spot forex contract. They encourage audience members to engage in the forums, which track and address product enhancements. The conclusion includes a preview of the back program and a request for attendees to fill out a survey form before concluding the webinar session.

  • 00:00:00 Andrew Reynolds, the product manager for automated trading tools at Trading Technologies, introduces ADL as an interactive graphical tool for creating algos that simplifies the development process for traders. Prior to ADL, traders who wanted to develop an algo had to learn how to write code, which was time-consuming and had a lengthy development cycle. However, ADL provides users with an intuitive tool to design and deploy trading algorithms without having to write a single line of code. This lowers the barrier to entry in terms of technical ability and allows traders to rapidly seize opportunities in the market. Moreover, ADL converts the designed algorithms into well-tested code that runs on co-located high-performance servers, ensuring the best possible performance.

  • 00:05:00 We learn about the ADL canvas, which consists of a variety of blocks representing different trading concepts or operations that can be dragged out to create algorithms. Each block has properties specific to its function and can be connected to other blocks to represent the desired logic. Group blocks can encapsulate specific logic and be saved as library blocks for reuse in other algos. To make sections easier to find, bookmarks can be added and a search mechanism is available to quickly locate specific blocks or sections. Additionally, ADL employs predictive techniques to detect potential block connections, making development faster.

  • 00:10:00 As we develop the algo, we can quickly identify and resolve any errors. The ADL platform also has predictive analytics that help in the development of the algo, such as automatically detecting the type of block being brought in. The classifications of algos were also discussed, namely those with entry side logic, exit side logic, and both entry and exit side logic. An example of creating an algo with entry side logic was demonstrated, using an order block and a field block to extract the bid price for a limit order. The ADL platform provides real-time feedback and user-friendly features to assist in the efficient development of algos.

  • 00:15:00 The instructor demonstrates how to add exit side logic to an algo and create an algo with both entry and exit side logic. An order block is added with a sell limit order and a message info extractor is attached to the fill output port of the initial order. This extractor pulls information from the messages fed through it, such as fills, extracting the fill price and quantity. A field block is also added to extract the minimum tick size, which is added to the fill price to set the hedge order to be one tick greater than the fill price. This price then becomes the price of the sell limit order, completing the algo. Alert blocks are also added to notify the trader of the algo's progress and help them distinguish between multiple algos.

  • 00:20:00 The speaker demonstrates how to improve the readability of an algorithm by using jump blocks in Trading Technologies' ADL®. They enhance a basic scalping algorithm by adding variations customized to a trader's preferences, with an entry and an exit point. For the exit point, they take out all entry side logic, add an existing order block, and attach it to the single order container. They then connect the fill messages from the demultiplexer to the fill messages of the previous logic, creating an algo that can be applied to any working order and, when filled, will automatically place a sell limit order one tick over the fill price at the fill quantity.

  • 00:25:00 The instructor explains the Order Management Algo (OMA), which applies algo logic to an existing order and can manipulate the price, quantity, stop price, and disclosed quantity as needed. This can be useful for a bid drifter, where logic is added to the ports to increase the price in intervals until the order is filled. The instructor also notes that users can flip the logic if needed and explains how ADL prohibits certain actions, such as attaching the instrument or price to the quantity, as well as logic checking to prevent an infinite loop. ADL is a context-specific language that understands the user's intentions and prevents unexpected behavior.

  • 00:30:00 The speaker discusses how ADL allows developers to protect themselves against P&L losses through its P&L risk block feature, which automatically stops an algo if losses exceed a predetermined amount. This feature is user-defined and can be set for each instance of an algo launched. The algos can be launched from several widgets within the front-end Auto Trader algo dashboard, order book, or MD Trader. The speaker highlights the one-click launching of algos directly from the MD Trader ladder, which allows you to choose the instrument and make changes to algo parameters. ADL also allows users to select colocation facilities based on the instrument and the ability to monitor algos' progress from the front end. It is also possible to set different accounts based on the algo.

  • 00:35:00 The presenters discuss how users can specify different accounts for each instrument when launching their algo using ADL. They also mention the resources available on Trading Technologies' website for learning more about ADL, as well as a support forum for discussing all things ADL. The presentation then moves into a Q&A session where Andrew addresses questions from the audience. One question brought up is about specifying accounts for each instrument, which the presenters had already covered earlier.

  • 00:40:00 The speaker discusses the upcoming addition of an analytics block that will allow users to extract historical data and perform built-in studies in ADL. They can also pull out historical data to build custom studies directly into the algorithm. A value bucket block can be used to store values for later lookup, sourcing as many values as needed. The speaker also says that the platform is broker-neutral, which means it can be connected to any broker that supports it. Finally, the speaker offers information on pricing and mentions that the "stacker" is a common algorithm type.

  • 00:45:00 The speaker discusses the various ways in which algos can be written using ADL, emphasizing that everyone has their version of a "secret sauce". The Trading Technologies community forum is a great resource for getting additional information on popular algo types such as the "stacker". There are many different ways to build a simple order type and the forum is a great place to learn. The speaker also explains the advantages of using single-click launch with autotraders and how it makes it easy to model more than one trade at a time. Additionally, they mention that the ADL dashboard is available on their mobile apps and allows traders to pause and restart algos while they are away from their desk.

  • 00:50:00 The speaker discusses how the ADL platform can be accessed through a free demo account on the TradeTT site, allowing users to start using and demoing the platform right away. The speaker also mentions that the ADL platform is co-located with major exchanges and offers a pool of servers located in facilities in each location and a gen-pop server for users who want to try out different trades. Furthermore, the speaker talks about the new Analytics Block that ADL will launch in the first half of next year, which will provide historical data and the ability to perform studies on that data. Finally, the speaker touches on web services and APIs, as well as the release of the TT REST API on December 1st, and how the ADL platform can be used for forex trading.

  • 00:55:00 The speaker discusses the availability of foreign exchange trading options on the Trading Technologies platform, noting that currently there are no immediate plans to connect directly with forex exchanges although forex features are available on the CME and a spot forex contract is being offered by NYSE. The speaker also encourages audience members to ask questions in the forums, where product enhancements are tracked and responded to. The audience is directed to visit tryTTnow.com for a free demo of the Trading Technologies platform. The conclusion includes a preview of the back program and the request that attendees fill out a survey form before exiting the webinar session.
Build your own algos with ADL® by Trading Technologies
  • 2017.10.28
  • www.youtube.com
Thursday 26th October 2017, 7:30 PM IST | 10:00 AM EST | 10:00 PM SGT. Learn how to build, deploy, and launch a basic algo using ADL. The webinar covers: - What is AD...

Quantitative Finance | Introduction to Machine Learning | Quantiacs | By Eric Hamer

Eric Hamer, the CTO of Quantiacs, introduces the partnership between Quantiacs and Quantinsti, aiming to democratize the hedge fund industry. This collaboration provides training sessions that equip students with practical skills using Quantiacs' open-source tools and data. Quantiacs functions as a crowd-sourced hedge fund, connecting quantitative analysts who develop algorithms with capital, while Quantinsti offers courses in algorithmic trading. Hamer highlights that participating quants can compete in Quantiacs competitions, where they have the opportunity to win investment capital and a share of the profits.

Hamer delves into how Quantiacs connects coders' algorithms to capital markets, benefiting both the quant and Quantiacs if the strategies prove successful. Quantiacs strives to promote quantitative trading by offering downloadable desktop toolkits for MATLAB and Python, sample trading strategies, and free end-of-day futures data dating back to 1990. They have also incorporated macroeconomic indicators to assist clients in improving their algorithms. Moreover, Quantiacs provides an online platform where users can submit and evaluate their algorithms at no cost. Currently focused on futures, Quantiacs aims to potentially provide comparable data for equity markets in the future.

The speaker explains the two primary functions of trading strategies in the Quantiacs platform: the cost function and the trading system. The cost function accounts for transaction costs and commissions by utilizing 5% of the difference between the high and low prices of a given day. On the other hand, the trading system allows users to request price information and provide a weight vector or matrix that determines portfolio allocation. Quantiacs discourages the use of global variables and offers a settings parameter to maintain necessary state information. Hamer provides an example of a simple trading strategy that has yielded a 2.5% annual return. The strategy's output includes an equity curve, performance of long and short positions, and individual futures performance. Quantiacs evaluates strategies based on positive performance, low volatility, and the Sharpe ratio, which measures risk-adjusted returns.
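As a point of reference, a strategy in the legacy Quantiacs Python toolkit is a pair of functions along the lines sketched below. The function names follow the toolkit's documented convention as best recalled; the momentum rule, futures symbols, and parameter values are illustrative assumptions, not a recommended strategy.

```python
import numpy as np

def myTradingSystem(DATE, OPEN, HIGH, LOW, CLOSE, VOL, exposure, equity, settings):
    # Toy rule: long markets with a positive 20-day return, short the rest.
    momentum = CLOSE[-1, :] / CLOSE[-20, :] - 1.0
    weights = np.sign(momentum)
    total = np.nansum(np.abs(weights))
    if total > 0:
        weights = weights / total  # normalize to portfolio allocation weights
    return weights, settings       # settings carries any state between calls

def mySettings():
    # slippage = 0.05 charges 5% of each day's high-low range per trade,
    # matching the cost function described above.
    return {
        'markets': ['F_ES', 'F_GC', 'F_CL'],  # assumed futures symbols
        'lookback': 504,
        'budget': 10**6,
        'slippage': 0.05,
    }
```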

The concept of machine learning and its applications in quantitative finance are introduced by Hamer. He highlights that a significant portion of trades on American stock exchanges, approximately 85% to 90%, are computer-generated. Machine learning techniques such as regression, classification, and clustering are becoming increasingly prevalent in the field. Hamer discusses some pitfalls associated with machine learning, emphasizing the importance of maximizing risk-adjusted returns without excessive trading. While neural networks can yield excellent results, their execution times can be lengthy, and traditional CPU architecture may not be optimal. However, high-performing GPUs are available, significantly reducing execution time. Although open-source libraries like Python and MATLAB exist, setting up and training a machine learning algorithm can be a complex process requiring effort and dedication.

Hamer delves into the process of machine learning, starting with specifying the problem statement and identifying the type of machine learning problem. He explains the requirement for numeric data in machine learning and discusses the division of data into training and test sets for model training and evaluation, respectively. Hamer provides an example demonstrating how the Quantiacs Python API can be utilized to make predictions on the mini S&P 500 futures contract, displaying the results using the Keras neural network API.

The limitations of the machine learning model created for predicting future stock prices are discussed by Hamer. While the model may initially appear to accurately predict prices, closer inspection reveals that it is merely using today's data as a proxy for tomorrow's data. When applying the same algorithm to raw data returns, the model's predictions follow a similar shape but not the same magnitude as the true values. Hamer demonstrates the poor performance of the model when applied to trading data and explores potential avenues for improvement. He also provides a brief overview of the source code used in his trading system function.

Hamer proceeds to demonstrate the creation of a sequential Keras model for predicting S&P 500 futures returns. The model begins with a basic structure and incorporates specific layers. Hamer trains the model using training data, which comprises actual price data, while the y-values represent the return data to be predicted. Once trained, Hamer can extract the model from the settings and use it to predict returns based on the most recent data. While his simple S&P 500 mini model does not perform well, Hamer explains that proper techniques and optimizations such as gradient descent and boosting can solve the problem.
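A minimal sketch of such a model follows, written against the modern tensorflow.keras API with stand-in random data; the lag window, layer sizes, and training settings are illustrative assumptions, not Hamer's exact architecture.

```python
import numpy as np
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense

# Stand-in data: 20 lagged prices per sample, next-day return as the target.
X_train = np.random.randn(500, 20)
y_train = np.random.randn(500)

model = Sequential([
    Dense(32, activation='relu', input_shape=(20,)),
    Dense(1),  # linear output suits a regression target (a return)
])
model.compile(optimizer='adam', loss='mse')
model.fit(X_train, y_train, epochs=10, batch_size=32, verbose=0)

# Predict the next return from the most recent feature window.
predicted_return = model.predict(X_train[-1:])
```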

Techniques to enhance the validity of a machine learning algorithm in quantitative finance are discussed by Hamer. He suggests using the bootstrap aggregating technique, which involves running the algorithm on multiple subsets of the data to gain insights. Keeping strategies simple, utilizing multiple predictions to reach consensus, and being cautious of overfitting, data cleaning, and handling missing data and random variables are also recommended. Hamer believes that machine learning and artificial intelligence will continue to be crucial tools for forecasting financial markets.
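A minimal sketch of the bagging idea under these assumptions: model_factory is a hypothetical callable returning a fresh estimator with scikit-learn-style fit/predict methods, and the consensus is a simple average of per-bag predictions.

```python
import numpy as np

def bagged_predictions(model_factory, X, y, X_new, n_bags=10, seed=0):
    # Bootstrap aggregating: fit the same model on resampled subsets of the
    # data, then average the predictions to reach a consensus forecast.
    rng = np.random.default_rng(seed)
    preds = []
    for _ in range(n_bags):
        idx = rng.integers(0, len(X), size=len(X))  # sample with replacement
        model = model_factory()
        model.fit(X[idx], y[idx])
        preds.append(model.predict(X_new))
    return np.mean(preds, axis=0)
```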

The speaker introduces the EpAT and ConTA courses, both offering dedicated sessions on machine learning. EpAT caters to professionals seeking growth in the algo or quantitative trading field, while ConTA provides a self-paced course on implementing regression techniques using machine learning with Python. Hamer responds to questions regarding the choice between R and Python for machine learning and offers advice on avoiding overfitting when testing alternative datasets. He suggests training the model on both training and testing data and examining the difference in error between the two sets to prevent overfitting.

Hamer highlights the dangers of overfitting in machine learning for algo trading and suggests employing the bootstrap aggregation or bagging technique to split a dataset into smaller subsets for accuracy testing. Due to the noise and fluctuations in financial data, anything above 50% accuracy can be considered good.

Finally, Hamer emphasizes the significance of understanding technology to automate trading strategies. He stresses the need for education programs that provide training in the diverse skills required to succeed as an algorithmic trader.

  • 00:00:00 Eric Hamer, the CTO of Quantiacs, introduces the partnership between Quantiacs and Quantinsti, which aims to democratize the hedge fund industry by providing training sessions that allow students to gain practical skills using Quantiacs open-source tools and data. Quantiacs is a crowd-sourced hedge fund that connects quants who develop algorithms with capital, while Quantinsti offers courses in algorithmic trading. Hamer also highlights how Quants can participate in Quantiacs competitions to win investment capital and a portion of the profits.

  • 00:05:00 Eric Hamer of Quantiacs discusses how they connect coders' algorithms to capital markets, with both the quant and Quantiacs benefiting if the strategies are successful. Quantiacs aims to promote quantitative trading, offering downloadable desktop toolkits for MATLAB and Python, sample trading strategies, and free end-of-day futures data dating back to as early as 1990. Additionally, Quantiacs has added macroeconomic indicators to help clients improve their algorithms, and an online platform where users can submit and evaluate their algorithms for free. While currently only working with futures, Quantiacs may provide comparable data for equity markets in the future.

  • 00:10:00 The speaker explains the two main functions of trading strategies in the Quantiacs platform, which are the cost function and trading system. The cost function accounts for transaction costs and commissions by using 5% of the difference between the high and low prices of a given day. On the other hand, the trading system allows the user to request price information and pass back a weight vector or matrix that determines portfolio allocation. The platform discourages the use of global variables and provides a settings parameter for maintaining any necessary state information. The speaker then shows the output of a simple trading strategy, which has provided a 2.5% return per year and includes an equity curve, performance of long and short positions, and individual futures performance. Lastly, the platform evaluates strategies based on positive performance, low volatility, and the Sharpe ratio, which measures risk-adjusted returns.

  • 00:15:00 Eric Hamer introduces the concept of machine learning and its applications in quantitative finance. He mentions that 85% to 90% of trades on American stock exchanges are computer-generated, and that machine learning techniques like regression, classification, and clustering are becoming increasingly common. Hamer explains some of the pitfalls of machine learning and highlights the importance of maximizing the risk-adjusted return without excessive churning. While using neural networks can lead to extremely good results, execution times can be lengthy, and the traditional CPU architecture is not optimal. However, there are high-performing GPUs available that can significantly cut execution time. Despite the available open source libraries like Python and MATLAB, setting up and training a machine learning algorithm can be a complicated process that requires effort and work.

  • 00:20:00 Eric Hamer discusses the process of machine learning, starting with specifying the problem statement and identifying the type of machine learning problem. Hamer explains that everything must be numeric in machine learning and that the data set is typically divided into training and test data to train and evaluate the model respectively. Hamer also uses an example to explain how the Quantiacs Python API can be used to make predictions on the mini S&P 500 futures contract and display the results using the Keras neural network API.

  • 00:25:00 Eric Hamer discusses the limitations of the machine learning model he has created for predicting future stock prices. While the model appears to predict prices accurately at first glance, closer inspection reveals that it is actually just using today's data as a proxy for tomorrow's data. When the same algorithm is applied to raw data returns, the model's predictions follow the same shape but not the same magnitude as the true values. Hamer then demonstrates the poor performance of the model when applied to trading data and discusses potential avenues for improvement. He also provides a brief overview of the source code used in his trading system function.

  • 00:30:00 Eric Hamer demonstrates how to create a sequential Keras model to predict S&P 500 futures returns. He starts with a bare-bones model and adds specific layers. Eric then trains his model with the training data, which is the actual price data, and the y-values are the return data that he hopes to predict. Once the model is trained, Eric is then able to pull his model out of the settings and use it to predict what the returns will be based on the most recent data. Eric's simple S&P 500 mini model does not work well, but he explains how the problem is solvable with proper technique and optimization such as gradient descent and boosting.

  • 00:35:00 Eric Hamer discusses some techniques that can be used to increase the validity of a machine learning algorithm applied to quantitative finance, such as the bootstrap aggregating technique, which involves running the algorithm on many different chopped-up versions of the data to see what can be learned from it. He advises keeping the strategies simple and using multiple predictions in order to reach consensus, as well as being careful with overfitting, cleaning the data, and accounting for missing data and random variables. Overall, he believes that machine learning and artificial intelligence will continue to be key tools in forecasting financial markets.

  • 00:40:00 The speaker introduces the EpAT and ConTA courses, both of which offer dedicated sessions on machine learning. EpAT is designed for professionals looking to grow in the algo or quantitative trading field, and ConTA offers a self-paced course on implementing regression techniques using machine learning with Python. The speaker also answers questions about choosing between R and Python for machine learning and how to avoid overfitting when testing alternative datasets. The speaker recommends training the model on both training and testing data and looking at the difference in the error between the two to avoid overfitting.

  • 00:45:00 Eric Hamer discusses the pitfalls of overfitting in machine learning for algo trading and suggests using the bootstrap aggregation or bagging technique to split up a data set into smaller subsets to test accuracy. He also notes that anything over 50% accuracy can be considered good in financial data due to its noise and fluctuation.

  • 00:50:00 Eric Hamer emphasizes the importance of understanding technology in order to automate trading strategies. He mentions the need for education programs that can train people on the various skills required to be a successful algorithmic trader.
Quantitative Finance | Introduction to Machine Learning | Quantiacs | By Eric Hamer
  • 2017.06.16
  • www.youtube.com
This exciting tutorial will take you through the basics of machine learning. It will cover the cool features of the Quantiacs toolkit, and illustrate how to ...

Can we use Mixture Models to Predict Market Bottoms? by Brian Christopher - 25th April 2017

Brian Christopher, a quantitative researcher and Python developer, delivers a comprehensive presentation on the limitations of traditional time series analysis and introduces mixture models, specifically hidden Markov models (HMMs), as a promising alternative for predicting returns and identifying market regimes. He emphasizes the need for models that can handle non-stationary data and approximate nonlinear distributions, which are essential in financial forecasting.

Christopher explores how mixture models, particularly HMMs, can be used to estimate an asset's most likely regime, along with the associated means and variances for each regime. He explains the computational process, which involves alternating between computing class parameters and evaluating likelihood data. The Gaussian mixture model (GMM), a well-known mixture model, assumes that each regime follows a Gaussian distribution. Christopher demonstrates how the expectation-maximization algorithm is employed to calculate probabilities and regime parameters until convergence. To illustrate this, he showcases an example of classifying the SPY ETF's low volatility, neutral, and high volatility regimes.
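A minimal sketch of this regime classification using scikit-learn's GaussianMixture, which is fit by expectation-maximization; the returns series below is stand-in random data, not the SPY series used in the talk.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

returns = np.random.randn(2000) * 0.01  # stand-in for daily SPY returns

# Three components for low-volatility, neutral, and high-volatility regimes.
gmm = GaussianMixture(n_components=3, covariance_type='full', random_state=0)
labels = gmm.fit_predict(returns.reshape(-1, 1))

for k in range(3):
    mu = gmm.means_[k, 0]
    sigma = np.sqrt(gmm.covariances_[k, 0, 0])
    print(f"regime {k}: mean={mu:.5f}, std={sigma:.5f}")

print("most recent regime label:", labels[-1])
```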

Next, Christopher delves into how GMMs can handle non-stationary and nonlinear datasets, overcoming the limitations of traditional time series analysis. He presents a toy strategy that utilizes four factors, including asset returns and the US treasury ten-year to three-month spread, to estimate sequence returns and parameters. GMMs are used to fit and predict, extracting the last regime label's estimate to determine the specific regime's mean and variance. Instead of assuming a normal distribution, the Johnson su distribution is utilized as part of the strategy to account for the nonlinear nature of the data.

The speaker discusses a strategy for predicting market bottoms based on the assumption that returns outside of confidence intervals are outliers. By constructing 99% confidence intervals through a thousand samples, returns below the lower confidence interval are considered outliers. Christopher analyzes the returns after the outlier event, assuming a long-only or buy position in the ETF for a specified number of days. The model adapts to changing volatility, and while the overall accuracy is around 73%, the equity curve does not perform as well as a buy-and-hold strategy. Christopher encourages the audience to explore the data themselves, as the datasets used in the presentation are available on GitHub.
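A sketch of that outlier test under stated assumptions: the Johnson SU shape parameters below are placeholders (Christopher chose his via a coarse search), and the regime mean and standard deviation would come from the GMM's estimate for the latest regime label.

```python
import numpy as np
from scipy import stats

a, b = 0.5, 2.0                          # assumed Johnson SU shape parameters
regime_mean, regime_std = 0.0005, 0.012  # assumed estimates from the GMM

# Draw 1,000 samples and take the 0.5th percentile as the lower edge
# of a two-sided 99% confidence interval.
samples = stats.johnsonsu.rvs(a, b, loc=regime_mean, scale=regime_std, size=1000)
lower_band = np.percentile(samples, 0.5)

todays_return = -0.035
if todays_return < lower_band:
    print("outlier: enter a long-only position for the holding period")
```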

Christopher shares his analysis of using mixture models to predict market bottoms for various ETFs. He examines the distribution of median returns for each ETF across different look-back and holding time periods. SPY, Triple Q, and TLT consistently outperform in different dimensions, while GLD, EFA, and EEM exhibit more symmetric distributions. He also evaluates the sum ratio, which measures the total returns of events greater than 0 divided by returns less than 0, considering values greater than 1 as successful. SPY, Triple Q, and TLT show strong performance across multiple dimensions and look-back periods. However, Christopher cautions that longer holding periods may be more influenced by the overall market trend.
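The sum ratio itself reduces to a few lines; a minimal sketch:

```python
import numpy as np

def sum_ratio(event_returns):
    # Total gains of positive events divided by total losses of negative
    # events; a value above 1 counts as successful.
    gains = event_returns[event_returns > 0].sum()
    losses = np.abs(event_returns[event_returns < 0].sum())
    return gains / losses

print(sum_ratio(np.array([0.02, -0.01, 0.03, -0.005])))  # -> ~3.33
```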

The presenter discusses the performance of different assets in the market using mixture models to predict market bottoms. The study reveals that assets such as SPY, Triple Q, TLT, and GLD perform well depending on variables such as the number of steps or the look-back period. However, the performance of certain assets deteriorates with longer holding periods. The study evaluates median returns across different components and identifies promising results for assets like EEM and EFA. The importance of proper sampling distribution is emphasized, and the use of the Johnson SU distribution is shown to be effective. Overall, the strategy utilizing mixture models to predict market bottoms proves to be compelling.

Christopher explains that while GMM has consistently shown success with assets like SPY, Triple Q, and TLT, there are alternative strategies that perform equally or better. He briefly discusses the code for the model runner class and the run model convenience function, which implements the GMM components. He emphasizes that the model was implemented in a walk-forward fashion to avoid look-ahead bias. Additionally, Christopher provides the data he used in HDF5 format on GitHub.

The speaker explains how to organize and analyze the outputted data to assess the effectiveness of the mixture model strategy. Various slicing and grouping techniques can be employed to evaluate metrics and means. The Johnson su distribution is used to adapt to changing volatility in the return series and is compared to the normal distribution. Christopher suggests that the accuracy of the normal distribution is poor and that it may be more beneficial to simply hold the market. However, he encourages individuals to explore the data on GitHub and offers to address any questions or participate in a webinar.
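As a hypothetical illustration of that slicing and grouping with pandas (the frame layout and numbers are invented, not Christopher's output):

```python
import pandas as pd

# Hypothetical results frame: one row per detected outlier event.
results = pd.DataFrame({
    'etf': ['SPY', 'SPY', 'QQQ', 'TLT'],
    'lookback': [50, 100, 50, 100],
    'hold_days': [5, 10, 5, 10],
    'event_return': [0.012, -0.004, 0.020, 0.007],
})

# Median event return per ETF and look-back window.
print(results.groupby(['etf', 'lookback'])['event_return'].median())
```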

During the Q&A session, Christopher answers audience questions regarding his webinar on using mixture models to predict market bottoms. He clarifies that he determined the shape parameters for the Johnson distribution through a coarse parameter search and did not extensively research the results. He also discusses how he selected helpful factors for his model, highlighting the inclusion of US-based interest rate or fixed-income metrics to enhance the model's success in predicting US-based asset returns.

Christopher addresses additional audience questions regarding the application of GMM to returns instead of price, the issue of scale when using price, the bias-variance problem with multiple factors, and the similarity between look-back and back-testing. He suggests further exploration and research on combinations of factors that are more predictive across a wider range of assets. He also emphasizes the importance of setting a natural limit to the number of GMM components to avoid overfitting. Christopher invites the audience to reach out to him for further questions and details.

  • 00:00:00 Brian Christopher, a quantitative researcher and Python developer, discusses the limitations of traditional time series analysis when predicting returns or timing the market due to the strict requirement of stationary data and the need for a model that can approximate nonlinear distributions. He then explores the use of mixture models, specifically hidden Markov models (HMMs), which are built on several established concepts such as Markov models and can be used to approximate nonlinear distributions and do not require stationary data.

  • 00:05:00 Brian Christopher discussed how using mixture models can help predict market bottoms and estimate an asset's most likely regime, including the associated means and variances for each regime. The model alternates between computing class parameters and evaluating likelihood data given each parameter, including each regime's mean and variance and probability of transition between them. The most well-known model is the Gaussian mixture model, which assumes that each regime is generated by a Gaussian process and uses the expectation-maximization algorithm to calculate the probabilities and regime parameters until convergence or another stopping criterion is met. Brian showed an example of using the model to classify the SPY ETF's low volatility, neutral, and high volatility regimes.

  • 00:10:00 Brian Christopher explains how Gaussian mixture models (GMMs) can handle non-stationary datasets and approximate non-linear data sets, overcoming some of the weaknesses of traditional time series analysis models. Christopher designs a toy strategy that uses four factors to estimate the sequence of returns and parameters, including asset returns, the US treasury ten-year to three-month spread, and more. The approach uses GMMs to fit and predict, extracting the estimate of the last regime label to get the model estimate of the mean and variance for that specific regime, which is fed to the Johnson SU distribution, instead of the normal distribution, as part of the strategy.

  • 00:15:00 The speaker discusses a strategy that assumes any actual returns that are outside of the confidence intervals are outliers and predicts market bottoms based on this assumption. They draw a thousand samples to construct 99% confidence intervals and assume that returns below the lower confidence interval are outliers. They then look at the returns after the outlier event, assuming a long-only or buy of the ETF for a number of days. The model adapts to changing volatility, and the accuracy of the model overall is around 73%, but the equity curve leaves a bit to be desired, especially compared to a buy-and-hold strategy. The speaker encourages people to play with the data themselves, as he made the data sets available on GitHub, and they can evaluate each ETF individually or collectively.

  • 00:20:00 Brian Christopher discusses his analysis of ETFs using mixture models to predict market bottoms. He looked at the distribution of median returns for each ETF across various look-back and holding time periods. SPY, Triple Q, and TLT outperformed across all dimensions, while GLD, EFA, and EEM had a more symmetric distribution. He also looked at the sum ratio, which sums up the total returns of every event greater than 0 divided by the sum of returns less than 0, and found that values greater than 1 were considered successful. SPY, Triple Q, and TLT outperformed across multiple dimensions and look-back periods. However, Christopher cautions that longer holding periods may be more affected by the general trend of the market.

  • 00:25:00 The speaker discusses the performance of different assets in the market using mixture models to predict market bottoms. The study found that assets such as SPY, Triple Q, TLT, and GLD perform well depending on the variables, such as the number of steps or the look-back period. The performance of certain assets degrades with longer hold periods. The study evaluated the median returns across different components and found promising results for assets such as EEM and EFA. The study also highlights the importance of proper sampling distribution, and the use of the Johnson SU distribution is shown to be effective. Overall, the strategy using mixture models to predict market bottoms is found to be compelling.

  • 00:30:00 The presenter explains that the Gaussian Mixture Model (GMM) is a framework for asset or return distribution predictions that has shown consistent success with SPY, Triple Q, and TLT. However, some strategies have performed equally as well or better, and expectations need to be tempered accordingly. The presenter then briefly goes over the code for the model runner class and the convenience function called run model, which implements the GMM components. The presenter emphasizes that the model was implemented in a walk-forward fashion to ensure there was no look-ahead bias involved. Additionally, the presenter made the data he used available on GitHub in HDF5 format.

  • 00:35:00 The speaker discusses how to organize and analyze the outputted data to determine the effectiveness of the mixture model strategy. The data can be sliced and grouped in various ways to evaluate the metrics and the means. The Johnson SU distribution is used to adapt to changing volatility in the return series and is compared to the normal distribution. The speaker suggests that the accuracy of the normal distribution is poor and it might be better to just hold the market. However, the speaker encourages exploration of the data on GitHub and is willing to answer any questions or participate in a webinar.

  • 00:40:00 Brian Christopher answers some questions from the audience about his webinar on using mixture models to predict market bottoms. He explains that he determined the shape parameters for the Johnson distribution through a coarse parameter search and did not extensively research the results. Christopher also discusses how he determined if the factors he selected were helpful in his model, explaining that he tried many different factors and ultimately found that using US-based interest rate or fixed-income metrics helped to make his model more successful in predicting US-based asset returns.

  • 00:45:00 Brian Christopher answers some questions from the audience about why he applied GMM to returns instead of price, the issue of scale when using price, the possible bias-variance problem on K factors, and the similarity of using look-back to back-testing. He also suggests further exploration and research on combinations of factors that are more predictive across a wider range of assets and setting a natural limit to the number of GMM components to avoid overfitting. Brian Christopher invites the audience to contact him for further questions and details.
Can we use Mixture Models to Predict Market Bottoms? by Brian Christopher - 25th April 2017
  • 2017.04.26
  • www.youtube.com
Date and Time: Tuesday, April 25th, 2017, 8:00 PM IST | 09:30 AM CST | 8:30 AM MST. This session explains and illustrates the use of Mixture Models with a sample...

Implied Volatility From Theory to Practice by Arnav Sheth - 7 March, 2017

Arnav Sheth, an esteemed professor with extensive knowledge of volatility, takes the stage as the speaker of a webinar titled "Implied Volatility From Theory to Practice." The host introduces Sheth, highlighting his expertise in the field, including his book publication and founding of a consultancy and analytical platform. The webinar aims to provide attendees with a comprehensive understanding of implied volatility, different types of volatility, trading strategies exploiting implied volatility, and available online resources and Chicago Board Options Exchange (CBOE) indexes for further exploration.

Sheth begins by offering a concise overview of options, covering various volatilities such as historical and implied volatility. He dives into one trading strategy in detail and discusses a couple of CBOE indexes, providing practical insights into their application. To provide a historical context, Sheth shares the origins of options, tracing back to the first recorded options contract around 500 BC. He recounts the story of Thales, a mathematician and philosopher, who secured exclusive rights to all olive presses during a bountiful harvest. This tale illustrates the early manifestation of options trading.

Moving into the modern definition of options, Sheth clarifies the concept of call options, describing them as contracts that allow speculation or hedging on the future of an underlying asset. He emphasizes that call options provide the recipient with the right, but not the obligation, to exercise the contract. Sheth proceeds to explain the basics of call and put options trading, highlighting that a call option grants the buyer the right to buy an underlying asset at a specified price, while a put option gives the buyer the right to sell the underlying asset at a predetermined price. He underscores that options trading is a zero-sum game, meaning that for every winner, there is a loser, resulting in total profits and losses equating to zero. Sheth warns about the risks of selling a call option without owning the underlying stock but notes that if one owns the stock, selling a call can help mitigate risk.

Sheth delves further into option contracts, covering long call, short call, long put, and short put options. He explains their potential profit and loss outcomes, cautioning against engaging in "naked options" trading for beginners. Moreover, he emphasizes the significance of accounting for the time value of money when calculating profit versus payoff. Sheth distinguishes between European and American options, clarifying that European options can only be exercised at expiration, while American options can be exercised at any time. He concludes this section by introducing the Black-Scholes-Merton pricing model, which he likens to a "levered stock purchase."

The focus then shifts to the Black-Scholes-Merton (BSM) model and its underlying assumptions. Sheth highlights one of these assumptions, stating that the volatility of returns is known and remains constant throughout the option's lifespan. He proceeds to discuss historical volatility, which represents the standard deviation of historical asset returns. Sheth explains its importance in predicting the potential profitability of an option, highlighting that higher volatility increases the option price due to a greater probability of the asset ending up "in the money."
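Historical volatility is straightforward to compute as the annualized standard deviation of daily log returns; a minimal sketch using the usual 252-trading-day convention:

```python
import numpy as np

def historical_volatility(prices, trading_days=252):
    # Annualized standard deviation of daily log returns.
    log_returns = np.diff(np.log(prices))
    return log_returns.std(ddof=1) * np.sqrt(trading_days)

print(historical_volatility(np.array([100.0, 101.0, 99.5, 102.0, 103.0])))
```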

Next, Sheth explores implied volatility and its role in reverse-engineering volatility from the Black-Scholes model using market options. Implied volatility is interpreted as the market's expected volatility and is calculated based on market option prices. Sheth introduces the VIX, which utilizes 30-day maturity at-the-money S&P 500 options to estimate implied volatility. The VIX measures the volatility that the market anticipates during the option's expiration period. He notes that traders often use implied volatility, derived from option prices, to price options rather than the other way around. Sheth emphasizes that if different strikes are associated with the same underlying asset, their implied volatility should remain constant.
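A minimal sketch of that reverse-engineering: price a European call with the BSM formula, then root-find the volatility that reproduces an observed market price. The inputs are illustrative numbers.

```python
import numpy as np
from scipy.stats import norm
from scipy.optimize import brentq

def bs_call(S, K, T, r, sigma):
    # Black-Scholes-Merton price of a European call.
    d1 = (np.log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * np.sqrt(T))
    d2 = d1 - sigma * np.sqrt(T)
    return S * norm.cdf(d1) - K * np.exp(-r * T) * norm.cdf(d2)

def implied_vol(market_price, S, K, T, r):
    # Find the sigma at which the model price matches the market price.
    return brentq(lambda sig: bs_call(S, K, T, r, sig) - market_price, 1e-6, 5.0)

# Example: a 30-day at-the-money call, echoing the VIX construction.
print(implied_vol(market_price=2.5, S=100, K=100, T=30 / 365, r=0.01))
```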

Sheth proceeds to explain the concept of volatility skew in options pricing. He demonstrates how implied volatility deviates from historical volatility as the strike price diverges, resulting in the volatility skew. Sheth highlights that the skew emerged after 1987 and presents an opportunity for traders, as it is reflected in options prices. He introduces the term "volatility risk premium," which represents the difference between implied and realized volatility. This premium can be exploited in trading strategies. Sheth clarifies that while the Black-Scholes model is primarily used to price options, it is more commonly utilized to obtain implied volatility.

The calculation of implied volatility in the options market becomes the next topic of discussion. Sheth explains how traders utilize market values of specific options on underlying assets and input these values into the Black-Scholes model to reverse engineer volatility. Implied volatility is then interpreted as the expected volatility by options markets for a specified period, often 30 days. Sheth introduces the concept of the volatility risk premium, showcasing how options markets tend to overestimate actual volatility. He concludes this section by presenting a frequency distribution of the volatility premium.

The speaker delves into trading strategies based on implied volatility, focusing on the concept of selling straddles. Sheth highlights that implied volatility is typically higher than realized volatility, resulting in overpriced options. As a result, the strategy involves selling straddles and going short on volatility. To assess the risks associated with these strategies, Sheth introduces Greek measurements, which provide a framework for evaluating risk. He offers an example scenario involving the purchase of an at-the-money straddle and discusses the profit and loss outcomes based on the underlying stock price. Sheth concludes by cautioning that if the stock price fluctuates significantly, options pricing may no longer be sensitive to volatility.
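A sketch of the short-straddle profit and loss at expiration; the strike and premiums are illustrative numbers, not the speaker's example.

```python
def short_straddle_pnl(S_T, K, call_premium, put_premium):
    # Sell one call and one put at the same strike: profit is the premium
    # collected minus the payoff owed at expiration.
    payoff_owed = max(S_T - K, 0) + max(K - S_T, 0)
    return call_premium + put_premium - payoff_owed

# Profitable inside K +/- total premium; losses grow outside that band.
for S_T in [90, 95, 100, 105, 110]:
    print(S_T, short_straddle_pnl(S_T, K=100, call_premium=2.5, put_premium=2.3))
```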

The video proceeds to discuss the use of options as a hedge against changes in stock prices. Sheth explains that by simultaneously purchasing a call and a put, or selling both, closest to the value of the stock price, delta neutrality can be achieved, but vega cannot be fully hedged. Sheth then introduces CBOE indexes as a convenient way to capitalize on the volatility premium, specifically mentioning the BXM (BuyWrite Monthly) index, which involves a covered call strategy, and the BFLY iron butterfly option. He explains that writing covered calls on the owned stock can reduce the risk associated with solely holding the underlying stock, but it also carries the possibility of losing the stock if it is called. Lastly, Sheth explains the strategy of the iron butterfly, which entails buying and selling four options with three strikes against the S&P 500.
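A sketch of the iron butterfly payoff at expiration under the "four options, three strikes" structure described; the strikes and net credit are hypothetical S&P 500 numbers.

```python
def iron_butterfly_pnl(S_T, K_low, K_mid, K_high, net_credit):
    # Short an at-the-money straddle at K_mid; a long put at K_low and a
    # long call at K_high act as wings that cap the loss on either side.
    short_straddle = -(max(S_T - K_mid, 0) + max(K_mid - S_T, 0))
    long_wings = max(K_low - S_T, 0) + max(S_T - K_high, 0)
    return net_credit + short_straddle + long_wings

for S_T in [2300, 2350, 2400, 2450, 2500]:
    print(S_T, iron_butterfly_pnl(S_T, 2300, 2400, 2500, net_credit=40))
```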

Towards the end of the webinar, Sheth presents a strategy involving the purchase of an out-of-the-money put and an out-of-the-money call. This strategy results in a short volatility position similar to a reverse straddle, but with a slightly exaggerated payoff to increase profit potential.

  • 00:00:00 The speaker Arnav Sheth is introduced as a professor who will be leading a webinar on Implied Volatility From Theory to Practice. He covers the different types of volatility, how to use implied volatility, trading strategies to exploit the characteristics of implied volatility, as well as available online resources and Chicago Board Options Exchange indexes to help attendees get started. The session is being recorded, and questions can be asked through the question and answer window. The speaker is introduced as a professor with extensive knowledge of volatility who has published a book and founded a consultancy and analytical platform.

  • 00:05:00 The speaker begins by providing a brief overview of the basics of options, including the different kinds of volatilities, such as historical and implied volatility. They then introduce one trading strategy in detail and a couple of the CBOE indexes. The speaker also talks about the history of options, starting with the first recorded options contract that dates back to around 500 BC, when the mathematician and philosopher Thales booked up all the olive presses during a bumper harvest. The speaker then goes on to define what a call option is in modern times, explaining that it is a contract that allows one to speculate or hedge on the future of an underlying asset, specifically giving the recipient the right but not the obligation to exercise it.

  • 00:10:00 The speaker explains the basics of call and put options trading. A call option gives the buyer the right, but not the obligation, to buy an underlying asset such as a stock at a specified price, while a put option gives the buyer the right but not the obligation, to sell the underlying asset at a specified price. The speaker notes that options trading is a zero-sum game, meaning for every winner there is a loser and that the total profits and losses always equal zero. Additionally, selling a call without owning the underlying stock is very dangerous, but if you own the underlying stock, selling a call can reduce your risk.

  • 00:15:00 Arnav Sheth discusses the different types of option contracts, including long call, short call, long put, and short put, and their potential profit and loss outcomes. He warns against starting out with "naked options" and stresses the importance of accounting for the time value of money when calculating profit versus payoff. Sheth also clarifies the difference between European and American options, stating that European options can only be exercised at expiration while American options can be exercised at any time. Finally, he covers the Black-Scholes-Merton pricing model for options, which he describes as a "levered stock purchase."

  • 00:20:00 The speaker introduces the Black-Scholes-Merton (BSM) model and its assumptions, one of which being that the volatility of returns is known and constant throughout the life of the option. He then focuses on historical volatility, which is the standard deviation of historical asset returns, and its importance in predicting the potential profitability of an option. Greater volatility indicates a greater option price because there is a greater probability that the asset will end up in the money, resulting in a large potential payoff.

  • 00:25:00 The speaker discusses implied volatility and how it is used to reverse engineer volatility from the Black-Scholes model using market options. Implied volatility is interpreted as the market expected volatility and is calculated through the input of the market option price. The VIX, calculated off of 30-day maturity at-the-money S&P 500 options, is the best estimate of implied volatility and measures the volatility that the market expects over the period of time that an option expires. Traders often use implied volatility calculated through option prices to price options, rather than the other way around. The implied volatility should be constant across all different strikes if they are talking about the same underlying asset.

  • 00:30:00 Arnav Sheth explains the volatility skew in options pricing. He shows that the implied volatility deviates from the historical volatility as we move away from the strike price, and this is known as the volatility skew. The skew appears only after 1987, and this becomes an opportunity for traders as it is also reflected in options prices. The difference between implied and realized volatility is called the volatility risk premium, which can be exploited in trading strategies. Sheth explains that the Black-Scholes model is used to price options, but it’s used more often to get implied volatility.

  • 00:35:00 Arnav Sheth explains how traders calculate implied volatility in an options market. Traders use the market value of specific options on underlying assets and input all five values into the Black-Scholes model to reverse engineer the volatility. Implied volatility is then interpreted as the volatility expected by options markets for the next specified period, usually 30 days. The concept of the volatility risk premium is introduced, which is the difference between the implied volatility and what actual volatility turns out to be, and it is shown that by and large, options markets tend to overestimate what actual volatility is going to be. This section ends with the frequency distribution of the volatility premium.

  • 00:40:00 The speaker discusses trading strategies based on implied volatility and the concept of selling straddles. The speaker explains that implied volatility is typically greater than realized volatility and this results in overpriced options. Therefore, the strategy is to sell straddles and go short volatility. The speaker also introduces the concept of Greek measurements to assess the risks involved in these strategies. The speaker provides an example scenario of buying an at-the-money straddle and discusses the profit and loss outcomes based on the underlying stock price. The speaker concludes by highlighting the risk of options pricing no longer being sensitive to volatility if the stock price fluctuates a lot.

  • 00:45:00 The video discusses the use of options to hedge against changes in stock prices. By simultaneously purchasing a call and a put or selling a call and put at the money closest to the value of the stock price, you can achieve delta neutrality but cannot hedge away vega. The video then moves on to explain CBOE indexes as an easy way to take advantage of the volatility premium, specifically the BXM and a covered call and the BFLY iron butterfly option. Writing covered calls on the stock that you own can reduce the risk of solely holding the underlying stock, but one must be prepared to lose the stock if it is called. Finally, the video explains the strategy of the iron butterfly, which involves buying and selling four options with three strikes against the S&P 500.

  • 00:50:00 The speaker illustrates a strategy that involves buying an out-of-the-money put and an out-of-the-money call, which, combined with the short at-the-money straddle, leads to a short volatility position similar to a reverse straddle. The payoff is slightly modified to increase profit.
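
The reverse-engineering step described at 00:25:00 and 00:35:00 can be made concrete with a few lines of code. The following is a minimal sketch, not from the talk: it assumes a European call on a non-dividend-paying underlying and uses a standard root-finder to back the volatility out of an observed market price.

```python
# Minimal sketch: invert Black-Scholes to obtain implied volatility from a
# market option price. Assumes a European call with no dividends; all
# parameter values below are illustrative, not from the talk.
from math import log, sqrt, exp
from scipy.optimize import brentq
from scipy.stats import norm

def bs_call_price(S, K, T, r, sigma):
    """Black-Scholes price of a European call."""
    d1 = (log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    return S * norm.cdf(d1) - K * exp(-r * T) * norm.cdf(d2)

def implied_vol(market_price, S, K, T, r):
    """Find the sigma at which the model price matches the market price."""
    return brentq(lambda sig: bs_call_price(S, K, T, r, sig) - market_price,
                  1e-6, 5.0)  # bracket between near-zero and 500% volatility

# Example: a 30-day at-the-money call, echoing the VIX discussion above.
iv = implied_vol(market_price=2.50, S=100, K=100, T=30 / 365, r=0.01)
print(f"implied volatility: {iv:.2%}")
```

Running the same inversion across many strikes on the same underlying is what exposes the skew discussed at 00:30:00, since under pure Black-Scholes assumptions every strike would return the same sigma.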
Implied Volatility From Theory to Practice by Arnav Sheth - 7 March, 2017
  • 2017.03.08
  • www.youtube.com
Date and Time: Tuesday, March 7, 2017 (9:30 PM IST | 8:00 AM PST). Volatility is a cornerstone concept in options trading, and all traders have a theory of how...
 

How to Use Financial Market Data for Fundamental and Quantitative Analysis - 21st Feb 2017



How to Use Financial Market Data for Fundamental and Quantitative Analysis - 21st Feb 2017

Speakers:

  • Deepak Shenoy (Founder and CEO, Capitalmind)
  • Maxime Fages (Founder, Golden Compass Quantitative Research)
  • Marco Nicolás Dibo (CEO, Quanticko Trading)

Learn to trade fundamentals profitably, understand the challenges surrounding High-frequency data analysis, discover the opportunities and gotchas in Futures trading, and view a live demonstration of a step-by-step tutorial on one of the most popular trading strategies, the Pairs trading strategy!

How to Use Financial Market Data for Fundamental and Quantitative Analysis - 21st Feb 2017
  • 2017.02.22
  • www.youtube.com
Date and Time: Tuesday, February 21, 2017, 7:00 PM IST | 9:30 PM SGT | 10:30 AM ART. Speakers: - Deepak Shenoy (Founder and CEO, Capitalmind) - Maxime Fages (Found...
 

Informative Session on Algorithmic Trading



Informative Session on Algorithmic Trading

In the opening of the informative session on algorithmic trading, the speaker expresses gratitude for the growing interest in this domain and acknowledges the significant impact it has had over the years. They introduce Nitesh Khandelwal, the co-founder of iRage and QuantInsti, as the speaker for the session. Nitesh is described as having rich experience in financial markets and will provide an overview of algorithmic trading, trends, and opportunities, particularly for beginners. The speaker highlights recent news articles that demonstrate the increasing popularity of algorithmic trading and its projected growth rate of over 10% CAGR globally in the next five years.

The speaker dives into the growth and opportunities in algorithmic trading, emphasizing its rapid expansion with double-digit percentage numbers worldwide. They present data from different exchanges, showcasing the increasing volumes of algorithmic trading in equity and commodity markets. To define algorithmic trading, they explain it as the process of using computers programmed with a defined set of instructions to place trading orders at high speed and frequency, aiming to generate profits. The critical role of technology in algorithmic trading is emphasized, especially in high-frequency trading, where it accounts for a significant portion (up to 60-70%) of a trading strategy's profitability.

Moving on to the key aspects of algorithmic trading, the speaker discusses technology, infrastructure, and strategy. They highlight the prominent role of technology in today's algorithmic trading world, with technocrats and technology-oriented traders leading the way. Infrastructure is identified as a crucial factor that defines a trader's success probability, emphasizing the importance of the type of infrastructure used. Lastly, the speaker explains that the trading strategy itself is what ultimately determines profitability and success, accounting for 30-70% of a trader's overall success probability. They outline the different phases of strategy development, including ideation, modeling, optimization, and execution.

The stages of algorithmic trading, such as optimization, testing, and execution, are described by the speaker. They stress the importance of optimizing the input variables of a trading model to ensure consistent output before moving forward with execution; a sketch of this optimize-then-verify step follows below. Additionally, when automating execution, the speaker cautions about potential risks and highlights the need for a robust risk management system to ensure safety and prevent operational risks. They mention that quoting on a leg statistically leads to major gains and higher returns per trade.
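
The talk does not show code, but the optimize-then-verify discipline it describes can be illustrated with a small, entirely hypothetical example: tune one input variable of a toy moving-average model on a training window, then confirm the output stays consistent on data the optimizer never saw.

```python
# Hypothetical sketch of the optimize-then-verify step described above:
# tune one input variable in-sample, then check for consistent output
# out-of-sample before any live execution. Data and names are invented.
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
prices = pd.Series(100 + rng.standard_normal(1000).cumsum())

def strategy_return(prices, lookback):
    """Total return of a simple moving-average trend rule."""
    signal = (prices > prices.rolling(lookback).mean()).astype(int).shift(1)
    return float((signal * prices.pct_change()).sum())

train, test = prices[:700], prices[700:]

# Optimization phase: search the input variable on the training window only.
best = max(range(5, 100, 5), key=lambda lb: strategy_return(train, lb))

# Verification phase: a robust setting should not collapse out-of-sample.
print("best lookback:", best)
print("in-sample return    :", strategy_return(train, best))
print("out-of-sample return:", strategy_return(test, best))
```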

The risks involved in algorithmic trading are discussed, including the potential for significant losses, and the importance of operational risk management is emphasized. The speaker also highlights the infrastructure required for algorithmic trading, such as high-speed lines and colocation, which allow for faster execution. The practical steps of setting up an algorithmic trading desk are explained, starting with market access through obtaining a membership or opening an account with a broker. The speaker mentions that licensing requirements may vary depending on the regulator. Choosing the right algorithmic trading platform is crucial and depends on the specific strategy to be executed.

Algorithmic trading platforms and their selection based on the type of strategy are discussed by the speaker. For low-frequency trading strategies, brokers often provide free, web-based platforms that allow for automated trading using API code in various programming languages. For higher sensitivity to latency, deployable platforms can be used at a cost of a few hundred dollars per month. The speaker also emphasizes that the type of infrastructure used depends on the strategy, with high-frequency data and analysis requiring top-class performance servers.

The speaker elaborates on different types of access and infrastructure required for algorithmic trading, considering various regulations and technologies. They explain the concept of co-location and proximity hosting, highlighting factors like latency, order routing lines, and market data. The importance of having a robust database and analytics for strategy optimization is emphasized, especially when dealing with large amounts of tick-by-tick data. The cost of access to these tools and the level of data usage required for different trading strategies are explored.

The speaker explains that algorithmic trading demands more sophisticated tools than Excel, such as R or Matlab, for data processing and model building. They also mention the increased compliance and audit requirements that come with automation, which is a global trend. Traders are advised to ensure that their transactions are auditable and that their code and strategies have proper protection against edge cases and runaway scenarios, with adequate safeguards in place. It is also recommended to have a team with a basic understanding of analytics, technology, and financial markets, with at least one team member specializing in all three areas. This is compared to the conventional trading success recipe, which required skills like number crunching, pattern recognition, typing speed, financial market understanding, and discipline.

The speaker discusses the success recipe for quantitative trading using algorithmic trading. They emphasize the need for a strong mathematical and statistical understanding, as well as proficiency in financial computing. Understanding technology and market structure is crucial, along with an overall comprehension of how hardware functions and networks play a role in trading success. Financial market understanding is also essential, and knowing how to code and model a strategy is an added advantage. For those setting up higher frequency shops, all of these elements are vital. The speaker highlights the importance of EPAT for individuals entering the trading world, especially since many individuals in finance lack the necessary technology understanding for success.

The speaker talks about addressing the lack of technology understanding among those using the quantitative analysis tools required for trading. They mention the creation of EPAT (Executive Programme in Algorithmic Trading) for working professionals who want to gain expertise in algorithmic trading. EPAT is a six-month integrated online program that includes weekend classes for four to four and a half months, followed by an additional one and a half to two months of project work. The project work allows participants to specialize in their chosen domain. The program consists of nine different modules taught by industry practitioners to ensure the material covered aligns with industry needs and trends.

The various modules of the EPAT program are discussed, starting with an introduction to the financial market, basic statistics, derivatives and risk, advanced statistics, and quantitative trading strategy. The quantitative trading strategy module covers various trading strategies and also includes topics related to setting up an algorithmic trading desk and the business aspects involved. The program also covers the implementation of algorithmic trading platforms using Python, providing instruction on the basics of Python and how to implement trading strategies on different platforms. Participants are assigned a mentor to oversee their project work, which acts as a specialization within their chosen domain.

The speaker discusses the support services provided by the career services team to participants and alumni of the algorithmic trading program. They highlight the significance of learning by doing, live lectures, and access to recorded lectures. The speaker presents a graph showing industry requirements and the profiles companies are seeking in applicants, ensuring that the program covers relevant topics. They mention that the program has industry leaders as instructors from different countries and that their alumni are based in over 30 countries worldwide. The various events and programs organized by the institute to increase awareness of algorithmic trading are also highlighted.

The speaker proceeds to answer various questions from the viewers related to algorithmic trading. They confirm that U.S. citizens can open trading accounts in India but need to go through a custodian and follow a specific process to open an account with a clearing broker. The speaker recommends books by Dr. Ernest P. Chan and Larry Harris for those interested in setting up an algorithmic trading desk or starting with algo trading. They also mention several platforms available in India for algorithmic trading, such as Symphony Fintech, Automated Trading, and uTrade, among others. Real-time tick data can be obtained either directly from the exchange or through one's broker. Additionally, they confirm that students can take the same strategy they developed in the course and apply it to live trading.

The speaker continues to answer various questions from viewers regarding algorithmic trading. They explain that coding and backtesting a strategy using different tools is possible, and that porting it to live trading is not difficult. Questions regarding regulations, compliance, and licensing for trading in the Indian market are also addressed. The speaker explains that permission from the exchange is required before deploying an automated trading strategy, and a demonstration of the strategy is necessary. They also discuss popular trading strategies, such as momentum-based, statistical arbitrage, and machine learning-based strategies.

The speaker discusses the types of trading strategies covered in the course and emphasizes the importance of learning how to develop new strategies, test them, and execute them. They answer questions about job prospects for course graduates, average salaries offered, and the programming skills required to analyze candlestick patterns. Concerns about knowledge level and time commitment for working professionals taking the course, as well as the costs associated with setting up an algorithmic trading desk in India, are also addressed. The speaker emphasizes the importance of having a basic understanding of key concepts before starting the program to maximize its value.

The speaker answers various questions related to algorithmic trading, suggesting that individuals with limited knowledge of stock markets can contact a sales specialist for guidance to gain a basic understanding of these domains before proceeding with the course. They explain that algorithmic trading is useful to individual traders who want to ensure discipline in their trades and scale up their strategies to include multiple instruments. The speaker also addresses concerns regarding transitioning from one course to another and brokers in India who offer algo trading services. Finally, they explain that server colocation at an exchange does not provide undue advantage to algorithmic traders but benefits retail traders by providing tighter bid-ask spreads.

The speaker discusses the benefits of algorithmic trading for retail traders and how technology can help minimize losses. They address questions about non-programmers learning Python for algorithmic trading and whether Indian residents can trade in global markets. They clarify that their firm primarily focuses on education rather than providing brokerage or algorithmic trading platforms. The speaker emphasizes that their program has helped hundreds of participants from over 30 countries and encourages interested individuals to contact their business development and sales teams for more information.

The speaker addresses several questions from viewers, including whether all strategies need to be approved by the exchange and how to protect a strategy. They explain that algo providers cannot see a trader's strategy, and exchanges are primarily concerned with ensuring strategies don't cause market havoc. They mention a student discount for the program and discuss the availability of algo trading in commodities markets in India. Furthermore, they highlight the importance of linear algebra and probability distribution in HFT profiles, depending on the role, and emphasize that algo trading can be applied worldwide to any trading instrument, including options and forex.

The speakers discuss coding strategies, providing reusable code, and the necessity of learning Python and R. They also answer questions regarding the validation of strategies, potential ROI, and the necessary infrastructure for a moderate number of traders. The speakers caution against sharing strategies with others and suggest focusing on learning best practices and developing unique trading strategy ideas.

The speakers answer various questions on algorithmic trading, including the ideal time frame for backtesting a strategy, the minimum internet bandwidth required for moderate-volume trading, and how to reduce brokerage costs. They also discuss the best vendors for algorithmic trading in India and whether discretionary trading strategies like Elliott Wave theory can be programmed. The speakers suggest that any strategy can be coded if one is comfortable with programming and has clear rules in mind. They advise traders to choose vendors based on their individual requirements and the pros and cons of each vendor.

In conclusion, the speaker thanks the attendees and offers further assistance. Although they were unable to answer all the questions due to time constraints, the speaker encourages the audience to send in their inquiries and provides contact information for the Quant Institute team. They express their appreciation for the interest in algorithmic trading and emphasize the importance of continuous learning and practice in this field.

  • 00:00:00 The speaker introduces the informative session on algorithmic trading and welcomes viewers. They express gratitude for the growing interest in algorithmic trading and the impact it has had over the years. The speaker introduces Nitesh Khandelwal, the co-founder of iRage and QuantInsti, who will be speaking at the session. Nitesh has rich experience in financial markets and will provide an overview of algorithmic trading, trends, and opportunities for beginners. The speaker also highlights recent news articles that demonstrate the increasing popularity of algorithmic trading and its expected growth rate of over 10% CAGR globally over the next five years.

  • 00:05:00 The speaker discusses the growth and opportunities in algorithmic trading, a domain which is rapidly expanding with double-digit percentage numbers across the globe. The speaker presents data from different exchanges highlighting the increasing volumes of algorithmic trading in both equity and commodity markets. The definition of algorithmic trading is provided as the process of using computers programmed to follow a defined set of instructions for placing trading orders to generate profits at high speed and frequency. The involvement of technology is emphasized as a critical aspect of algorithmic trading, especially in high-frequency trading, where it accounts for up to 60-70% of the reason why a trading strategy is making money.

  • 00:10:00 The speaker discusses the key aspects of algorithmic trading, which include technology, infrastructure, and strategy. The role of technology in algorithmic trading is prominent in today’s world with technocrats and technology-oriented traders leading the pack. Infrastructure plays a big role, and the type of infrastructure used defines the success probability of a trader. Lastly, the trading strategy is what makes money and accounts for 30-70% of a trader’s success probability. The speaker explains the different phases of strategy development from ideation to modeling and optimization to execution.

  • 00:15:00 The speaker describes the stages of algorithmic trading that involve optimization, testing, and execution. They emphasize the importance of optimizing the input variables of a model to ensure consistent output before moving on to execution. Additionally, when automating execution, the speaker warns of the potential risks and stresses the need for a risk management system to ensure safety and prevent operational risk. They suggest that quoting on a leg statistically leads to major gains and a higher return per trade.

  • 00:20:00 The risks involved in algorithmic trading are discussed, such as the potential for huge losses and the importance of operational risk management. The infrastructure required for algorithmic trading is also highlighted, including high-speed lines and colocation. Moving on to the practical steps of setting up an algorithmic trading desk, market access is a crucial first step, either by obtaining a membership or opening an account with a broker. Licensing requirements may vary depending on the regulator. Choosing the right algorithmic trading platform ultimately depends on the strategy to be executed.

  • 00:25:00 The speaker discusses algorithmic trading platforms and how to choose one based on the type of strategy being used. For low-frequency trading strategies, brokers often provide free, web-based platforms that allow for automated trading using API code for various programming languages. For those with higher sensitivity to latency, a deployable platform can be used for a few hundred dollars per month. The speaker also notes that the type of infrastructure used will depend on the type of strategy being deployed, with high-frequency data and analysis requiring a server for top-class performance.

  • 00:30:00 The speaker discusses the different types of access and infrastructure required for algorithmic trading and how this can depend on various regulations and technologies. The concept of co-location and proximity hosting is explained, along with considerations like latency, order routing lines, and market data. The importance of having a good database and analytics for strategy optimization is also emphasized, especially when dealing with large amounts of tick-by-tick data. The cost of access to these tools and the degree of data usage necessary for different trading strategies are also explored.

  • 00:35:00 The speaker explains that algorithmic trading requires more sophisticated tools than Excel, such as R or Matlab, to process data and build models. Automation also brings more compliance and audit requirements, which is a global trend. Algorithmic traders need to ensure their transactions are auditable, that their code and strategies are protected against edge cases and runaway scenarios, and that adequate safeguards are in place. Additionally, traders need a team with a basic understanding of analytics, technology, and financial markets, with at least one team member specializing in all three. The speaker compares this to the conventional trading success recipe, where number crunching, pattern recognition, typing speed, financial market understanding, and discipline were essential.

  • 00:40:00 The speaker discusses the success recipe for quantitative trading using algorithmic trading. It requires a strong mathematical and statistical understanding, as well as financial computing. Understanding technology and market structure is also necessary, along with overall comprehension of how hardware functions and networks come into play in trading success. Additionally, financial market understanding is required, and knowing how to code and model your strategy is an added advantage. For those setting up a higher frequency shop, all of these elements are vital. The speaker brings attention to EPAT, which is crucial for those who wish to enter the trading world, especially when most individuals in finance lack the understanding of technology needed for success.

  • 00:45:00 The speaker talks about how they addressed the lack of technology understanding around the quantitative analysis tools that are necessary for trading. EPAT (Executive Programme in Algorithmic Trading) was created for working professionals who want to gain expertise in algorithmic trading. The six-month integrated online program includes weekend classes for four to four and a half months and an additional one and a half to two months of project work. The project work acts as a tool for specialization in the domain where participants want to build their expertise. The program comprises nine different modules taught by industry practitioners to ensure that the material covered is aligned with industry needs and trends.

  • 00:50:00 The various modules of the EPAT program are discussed, starting with an introduction to the financial market, basic statistics, derivatives and risk, advanced statistics, and quantitative trading strategy. The latter includes various trading strategies and also covers the business environment, such as setting up an algorithmic trading desk and the business aspects that need to be considered. A module on algorithmic trading platforms using Python covers the basics of Python and the implementation of trading strategies on different trading platforms. The program includes a project that acts as a specialization, with a mentor assigned to each participant to oversee the project work.

  • 00:55:00 The speaker discusses the various support services provided by the career services team to both participants and alumni of the algorithmic trading program. They also mention the significance of learning by doing, live lectures, and access to recorded lectures. Furthermore, the speaker presents a graph showing the industry requirements and the profiles that companies are looking for in applicants; this information helps ensure that the program covers relevant topics. The program has industry leaders as instructors from different countries, and its alumni are based in over 30 countries worldwide. Finally, the speaker highlights the various events and programs organized to increase awareness of algorithmic trading.

  • 01:00:00 The speaker answers various questions related to algorithmic trading. He confirms that U.S. citizens can open trading accounts in India, but they need to come through a custodian and follow a process to get the account opened with the clearing broker. The speaker recommends books by Dr. Ernest P. Chan and Larry Harris to those who want to set up an algo trading desk or start with algo trading. He also mentions various platforms available in India for algo trading like Symphony Fintech, Automated Trading, and uTrade, among others. He notes that users can get real-time tick data either directly from the exchange or through their broker. Furthermore, he confirms that students can take the same strategy they developed in the course live into a real trading environment.

  • 01:05:00 The speaker answers various questions from viewers regarding algorithmic trading. The speaker explains that coding and backtesting a strategy using different tools is possible, and it is not difficult to port it to live trading. Viewers also ask about regulation, compliance, and licensing for trading in the Indian market. The speaker explains that permission from the exchange is required before deploying an eligible automated trading strategy, and a demonstration is necessary. Some popular trading strategies, such as momentum-based, statistical arbitrage, and machine learning-based strategies, are also discussed.

  • 01:10:00 The speaker discusses the types of trading strategies covered in the course, emphasizing the importance of learning how to come up with new strategies and how to test and execute them. The speaker also answers questions related to job prospects for those who complete the course, the average salaries offered, and the programming skills required to analyze candlestick patterns. They also address concerns about the required knowledge level and time commitment for working professionals taking the course, and the costs associated with setting up an algorithmic trading desk in India. The speaker emphasizes the importance of having a basic understanding of key concepts before starting the program, in order to extract maximum value from it.

  • 01:15:00 The speaker answers various questions related to algorithmic trading. They suggest that those who have limited knowledge in stock markets can contact a sales specialist for guidance to get a basic understanding of these domains and then proceed with the course. They explain that algo trading is useful to individual traders if they want to ensure discipline in their trades and scale up their strategy to include multiple instruments. The speaker also addresses concerns regarding migrating from one course to another and brokers in India who offer algo trading services. Finally, they explain that server colocation at an exchange does not provide undue advantage to algorithmic traders and actually benefits retail traders by providing tighter bid-ask spreads.

  • 01:20:00 The speaker discusses the benefits of algorithmic trading for retail traders, and how they can minimize losses with the use of technology. The speaker also addresses questions from participants, including whether it is possible for non-programmers to learn Python for algorithmic trading and whether Indian residents can trade in global markets. Additionally, the speaker clarifies that their firm is primarily focused on education rather than providing brokerage or algorithmic trading platforms. The speaker emphasizes that their program has helped hundreds of participants from over 30 countries, and encourages interested individuals to contact their business development and sales teams for more information.

  • 01:25:00 The speaker addresses several questions from viewers, including whether all strategies need to be approved by the exchange and how to protect the strategy. They explain that algo providers would not be able to see your strategy, and the exchanges are more concerned with ensuring the strategy doesn't cause market havoc. They also mention a student discount for the program and the availability of algo trading in commodities markets in India. Furthermore, they highlight the importance of linear algebra and probability distribution in an HFT profile depending on the role and that algo trading can be applied worldwide to any trading instrument, including options and forex.

  • 01:30:00 The speakers discuss coding strategies, providing reusable code, and the necessity of learning Python and R. They also answer questions regarding the validation of strategies, the potential ROI, and the needed infrastructure for a moderate number of traders. The speakers caution against sharing your strategy with others and suggest focusing on learning the best practices and coming up with your own trading strategy ideas.

  • 01:35:00 The speakers answer various questions on algorithmic trading, including the ideal time frame for backtesting a strategy, the minimum internet bandwidth required for moderate-volume trading, and how to reduce brokerage costs. They also discuss the best vendors for algorithmic trading in India and whether discretionary trading strategies like Elliott Wave theory can be programmed. The speakers suggest that any strategy can be coded if you are comfortable with programming and have clear rules in mind. They advise traders to choose vendors based on their individual requirements and each vendor's pros and cons.

  • 01:40:00 The speaker concludes the informative session on algorithmic trading by thanking the attendees and offering further assistance. Though they were unable to answer all the questions due to time constraints, the speaker encourages the audience to send in their inquiries and provides contact information for those interested in the program or algorithmic trading in general. The speaker also invites feedback from the attendees through a survey to help plan future webinars.
Informative Session on Algorithmic Trading
  • 2016.11.03
  • www.youtube.com
Know everything you wanted to know about Algorithmic Trading from the stalwart market practitioner Nitesh Khandelwal, Founder of iRage.
 

Impact of Brexit and Recent Market Events on Algorithmic Trading - July 19, 2016



Impact of Brexit and Recent Market Events on Algorithmic Trading - July 19, 2016

Nitesh Khandelwal brings a wealth of experience in the financial markets, having worked across various asset classes in different roles. He is the co-founder of iRageCapital Advisory Private Limited, a reputable company that specializes in providing Algorithmic Trading technology and strategy services in India. Nitesh played a pivotal role in driving the business aspects of iRageCapital and QuantInsti. At QuantInsti, he also served as the head of the derivatives and inter-market studies training department. Currently, he holds the position of Director at iRage Global Advisory Services Pte Ltd in Singapore. Nitesh has a background in bank treasury, with expertise in the FX and interest rate domains, as well as experience in proprietary trading desks. He holds a Bachelor's degree in Electrical Engineering from IIT Kanpur and a Post-Graduation in Management from IIM Lucknow.

Recent global events such as Brexit and the resulting volatility in the currency market have caused significant concern among investors. It is natural for risk aversion to increase after such events, as market participants exercise caution in their trading activities. However, even during such turbulent times, automated traders are thriving. Media reports indicate that hedge funds employing algorithmic trading consistently outperform manual traders, particularly in stressful market conditions.

Informative Session Contents:

  1. Analysis of the Biggest Trading Events of the Season

    • Examining the impact of Brexit on different market participants globally
    • Understanding the consequences of transaction cost increases, such as the STT hike by SEBI in India
    • Exploring how algorithmic trading firms have responded to these events
  2. Requirements for Becoming a Quant/Algo Trader

    • Identifying the industry requirements for aspiring traders in this field
    • Highlighting the essential skills and knowledge needed to succeed
    • Explaining the benefits of Quantinsti's Executive Program in Algorithmic Trading in developing these skills
Impact of Brexit and Recent Market Events on Algorithmic Trading - July 19, 2016
  • 2016.07.20
  • www.youtube.com
Tuesday, July 19, 2016, 06:00 PM IST | 08:30 PM SGT | 12:30 PM GMT. Introduction: Recent global events such as Brexit and the subsequent volatility in the currenc...
 

Quantitative Trading using Sentiment Analysis | By Rajib Ranjan Borah



Quantitative Trading using Sentiment Analysis | By Rajib Ranjan Borah

Sentiment Analysis, also known as opinion mining, is the process of computationally identifying and categorizing opinions expressed in a piece of text, especially in order to determine whether the writer's attitude towards a particular topic, product, etc. is positive, negative, or neutral.
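
The definition above can be made concrete with a deliberately tiny sketch. This is not from the session: real sentiment systems use trained models and curated lexicons, whereas the word lists and function below are purely illustrative.

```python
# Toy polarity classifier illustrating the definition above: score a piece
# of text by counting hits against small positive/negative word lists.
# The lists and examples are illustrative, not from the session.
POSITIVE = {"beat", "growth", "upgrade", "strong", "record"}
NEGATIVE = {"miss", "downgrade", "weak", "loss", "lawsuit"}

def polarity(text: str) -> str:
    words = set(text.lower().split())
    score = len(words & POSITIVE) - len(words & NEGATIVE)
    return "positive" if score > 0 else "negative" if score < 0 else "neutral"

print(polarity("Analysts upgrade stock on strong record growth"))  # positive
print(polarity("Firm warns of loss after downgrade"))              # negative
```

In a quantitative trading context, scores like these would be aggregated per instrument and used as an input signal, which is the bridge the session draws between opinion mining and trading.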

Quantitative Trading using Sentiment Analysis | By Rajib Ranjan Borah
Quantitative Trading using Sentiment Analysis | By Rajib Ranjan Borah
  • 2016.06.29
  • www.youtube.com
This session focuses on explaining Quantitative Trading using Sentiment Analysis. The video discusses how quantitative analysis of news can be used to make a...
 

Informative Session about Algorithmic Trading by Nitesh Khandelwal - May 24, 2016



Informative Session about Algorithmic Trading by Nitesh Khandelwal - May 24, 2016

Session Contents:

  • An overview of the Algorithmic Trading industry
  • Current market share and volumes
  • Growth and future of Algorithmic Trading globally
  • Risk measures and technological advancements
  • How to get started
  • Free and cheap ways to test the waters
 

Leveraging Artificial Intelligence to Build Algorithmic Trading Strategies



Leveraging Artificial Intelligence to Build Algorithmic Trading Strategies

The CEO and co-founder of a trading strategy development company explains the exciting potential of AI and machine learning in algo trading. These tools have been proven successful by large quantitative hedge funds, and their accessibility has increased significantly thanks to open-source libraries and user-friendly tools that don't require strong math or computer science backgrounds. The speaker also introduces key terms related to AI and machine learning in the context of algorithmic trading. Artificial intelligence is defined as the study of intelligent agents that perceive their environment and take action to maximize success. Machine learning, a subset of AI, focuses on algorithms that can learn and make predictions without explicit programming. Pattern recognition, a branch of machine learning, involves uncovering patterns in data, while association rule learning involves forming if-then statements based on those patterns. The speaker briefly mentions the concept of Big Data, which is characterized by its four V's: volume, velocity, variety, and veracity.

The presenter outlines the terms and concepts to be discussed, including big data, veracity, artificial intelligence, machine learning, pattern recognition, and data mining. They then delve into best practices and common pitfalls when building algorithmic trading strategies. These include defining tangible objectives for success, prioritizing simplicity over complexity, focusing on creating a robust process and workflow instead of relying on a single model, and maintaining a healthy skepticism throughout the entire process to avoid biased results.

The speaker proceeds to discuss how machine learning can address the challenge of selecting indicators and data sets for building trading strategies. Decision trees and random forests are introduced as techniques to identify important indicators by searching for the best data splits. Random forests are noted to be more robust and powerful than decision trees, albeit more complex. The speaker also explores how combining indicator sets using a technique called "wrapper" can create a more powerful combination.
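
To ground the decision-tree/random-forest idea, here is a minimal scikit-learn sketch. It is not the speaker's code: the indicator values are synthetic, and the label is constructed so that two of the three columns genuinely matter.

```python
# Minimal sketch of the indicator-selection idea above: rank candidate
# indicators by random-forest feature importance. Indicator values are
# synthetic and the names are illustrative, not from the talk.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(42)
n = 2000
indicators = {
    "rsi_3": rng.uniform(0, 100, n),
    "dist_sma_50": rng.normal(0, 1, n),
    "atr_14": rng.uniform(0, 2, n),       # pure noise relative to the label
}
X = np.column_stack(list(indicators.values()))
# Toy up/down label that depends only on the first two indicators.
y = ((X[:, 0] < 30) & (X[:, 1] < 0)).astype(int)

forest = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)
for name, score in sorted(zip(indicators, forest.feature_importances_),
                          key=lambda kv: -kv[1]):
    print(f"{name:12s} importance {score:.3f}")
```

On data like this, the noise column should land at the bottom of the ranking, which is exactly the "which indicators matter" question the talk describes.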

Next, the speaker discusses the use of technical indicators in algorithmic trading strategies and their benefits in identifying underlying patterns and trends. The question of optimizing indicator parameters based on machine learning is raised, and the concept of ensemble learning is introduced, which combines multiple classifiers to analyze data and uncover different patterns and information. The distinction between feature selection and feature extraction in machine learning is also mentioned, with a reminder to be mindful of curve fitting when utilizing multiple classifiers.

The presenters demonstrate the combination of pattern recognition and association rule learning as a way to leverage machine learning algorithms while still maintaining interpretability for trading strategies. They provide an example using a support vector machine to analyze the relationship between a three-period RSI and the price difference between the open price and a 50-period SMA on the Aussie USD. Clear patterns are translated into trading rules. However, they acknowledge the limitations of this method, such as analyzing high-dimensional data, automation challenges, and interpreting the output. The speaker introduces TRAIDE, Inovance's platform, as a possible solution to address these concerns and allow traders to leverage these algorithms with any indicators they desire.
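
The SVM example can be sketched in a few lines. What follows is not the presenters' code: it uses synthetic data in place of real Aussie USD bars, and the "price rose" labels are constructed so that a clean pattern exists for the model to find.

```python
# Sketch of the pattern-recognition step described above: fit an SVM on two
# features (a 3-period RSI and open price minus a 50-period SMA), then probe
# it to read the predicted-long region off as an if-then rule.
# All data here is synthetic; names and thresholds are illustrative.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(1)
rsi = rng.uniform(0, 100, 1500)
open_minus_sma = rng.normal(0.0, 1.0, 1500)
X = np.column_stack([rsi, open_minus_sma])
y = ((rsi < 35) & (open_minus_sma < -0.3)).astype(int)  # toy "price rose" label

model = SVC(kernel="rbf", gamma="scale").fit(X, y)

# Probe a coarse grid of feature values and summarize where the model
# predicts a long entry, turning the fitted boundary into a readable rule.
grid = np.array([[r, d] for r in range(0, 101, 5)
                 for d in (-1.5, -1.0, -0.5, 0.0, 0.5)])
longs = grid[model.predict(grid) == 1]
if len(longs):
    print(f"long when RSI <= {longs[:, 0].max():.0f} "
          f"and open-SMA <= {longs[:, 1].max():.1f}")
```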

The presenter proceeds to demonstrate how to build trading strategies using the cloud-based TRAIDE platform. They use the example of building a strategy for trading the Aussie USD on a daily chart using five years of data. To avoid curve fitting, the algorithm is trained only until January 1, 2015, leaving a year of out-of-sample data for testing. The importance of not wasting this out-of-sample data, to avoid biased backtesting, is emphasized. Using machine learning algorithms for indicator analysis and pattern identification is presented as a flexible and powerful approach to optimizing trading strategies.
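
The train/test discipline described here is easy to express in code. The sketch below is hypothetical (the CSV file and column names are invented), but it mirrors the cutoff the presenter uses: everything before January 1, 2015 is for training, and the final year is touched exactly once.

```python
# Sketch of the out-of-sample split described above: train only on data
# before 2015-01-01 and hold out the final year for one unbiased test.
# The file and column names are hypothetical.
import pandas as pd

df = pd.read_csv("audusd_daily.csv", parse_dates=["date"], index_col="date")

train = df.loc[:"2014-12-31"]   # the algorithm may only ever see this window
test = df.loc["2015-01-01":]    # evaluated once, at the very end

# Fit and refine rules on `train` only; repeatedly peeking at `test` while
# iterating quietly turns it into training data and biases the backtest.
print(len(train), "training rows,", len(test), "held-out rows")
```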

The presenter continues by demonstrating the process of building a trading strategy using the TRAIDE platform and the open-source indicator library TA-Lib. They analyze the price movement of Aussie USD over a five-year span, identify ranges with strong signals, and refine rules for going long by selecting indicator ranges and noting their relationships. By adding a rule for price relative to a 50-period SMA, they identify two different ranges with strong signals. The advantage of using TRAIDE is highlighted, as it allows for analysis of machine learning algorithm results and building rules directly from histograms for clearer interpretation.
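
The histogram-to-rule step can be approximated outside any particular platform. The following sketch is illustrative, not TRAIDE's method: it bins a synthetic indicator, computes the average next-bar return per bin, and keeps the bins with unusually strong averages as candidate long ranges.

```python
# Sketch of "building rules directly from histograms": bin an indicator,
# average the forward return within each bin, and keep the bins with a
# strong signal as rule boundaries. Data is synthetic and illustrative.
import numpy as np
import pandas as pd

rng = np.random.default_rng(7)
indicator = pd.Series(rng.uniform(0, 100, 3000))
fwd_return = pd.Series(rng.normal(0, 0.01, 3000)) + 0.005 * (indicator < 30)

bins = pd.cut(indicator, bins=10)
per_bin = fwd_return.groupby(bins, observed=True).mean()

# Bins whose mean forward return stands well above the rest become
# candidate "go long" indicator ranges.
strong = per_bin[per_bin > per_bin.mean() + per_bin.std()]
print(strong)
```

Using wide bins like these, rather than razor-thin ones, is also the guard against curve fitting that the speaker returns to at 00:50:00.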

The presenter discusses the procedure for building short rules for a trading strategy, including selecting the right indicators and refining rules to find strong short signals. Testing and exploring different patterns with the indicators are emphasized to find the optimal strategy. Generating code and testing the strategy out-of-sample in MetaTrader 4, with the inclusion of transaction costs, is also demonstrated. In response to a question, the presenter confirms that the approach qualifies as algorithmic trading.

The speaker explains how to test the strategy built on the most recent out-of-sample data, which was not used during the strategy-building process. The simulation is conducted using MetaTrader, a popular trading platform for currencies and equities. The platform has an active community of developers who create automated strategies and custom indicators, and it provides an excellent opportunity to test and trade on the same data. The focus of the simulation is to assess the strategy's performance on out-of-sample data. The speaker mentions that the tool is developed by a startup planning to make it available for free by white-labeling it directly to brokerages.

The speaker addresses the incorporation of risk and money management techniques into a strategy after backtesting. Simple take profit and stop-loss measures are discussed as ways to decrease drawdowns and protect against downside risks. To guard against curve fitting, the speaker emphasizes the use of wide bin selections, out-of-sample testing, and demo accounts before going live. The preference for simplicity and transparency over black box neural networks in trading strategies is also mentioned.
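
As a rough illustration of the take-profit/stop-loss point, the sketch below caps each trade's outcome and compares maximum drawdown before and after. It is a simplification (clipping a trade's final return ignores the intrabar path that real stop orders depend on), and all numbers are invented.

```python
# Sketch of the risk-management step above: cap per-trade outcomes with a
# stop-loss and take-profit, then compare maximum drawdowns. Clipping the
# final return is a simplification of real path-dependent stops; all
# thresholds and trade returns here are invented.
import numpy as np

rng = np.random.default_rng(3)
trade_returns = rng.normal(0.001, 0.02, 500)      # raw per-trade P&L
capped = np.clip(trade_returns, -0.01, 0.015)     # stop-loss / take-profit

def max_drawdown(rets):
    equity = np.cumprod(1 + rets)
    peak = np.maximum.accumulate(equity)
    return float(np.max(1 - equity / peak))

print("max drawdown, raw   :", round(max_drawdown(trade_returns), 3))
print("max drawdown, capped:", round(max_drawdown(capped), 3))
```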

During the presentation, the speaker addresses questions comparing their platform to others, such as Quantopian or QuantConnect, highlighting that their platform focuses more on strategy discovery and analysis rather than automating existing strategies. The importance of technical data in automated strategies is acknowledged, while also noting that their platform includes other datasets, such as sentiment indicators. MetaTrader 4 is demonstrated as a useful tool, and the significance of risk and money management strategies in trading is discussed. The speaker also covers best practices and common pitfalls in automated trading strategies.

The speaker discusses the use of indicators in trading strategies, emphasizing the trade-off between complexity and overfitting. They recommend using three to five indicators per strategy to strike a balance between containing sufficient information and avoiding overfitting. The importance of the data or feature fed into the algorithm and how the output is implemented is highlighted. The underlying algorithm is considered less crucial than the indicators used and their implementation. Questions about using the genetic optimizer in MetaTrader 4 and the importance of aligning indicators with the platform are also addressed.

The speaker explores the application of machine learning in value investing. The same process discussed earlier for algorithmic trading can be applied to value investing, but instead of technical indicators, datasets that quantify the inherent value of a company are used. Market cap or price-earnings ratio, for example, can reveal the relationship between these data and the price movement of the asset. Optimizing for return per trade and identifying when an algorithm is out of sync with the market are also discussed. Python and R are recommended as suitable programming languages, depending on one's coding experience and background.
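
The transfer from technical to fundamental inputs can be sketched the same way as the earlier indicator analysis. The example below is hypothetical: a synthetic cross-section of companies with a mild cheap-stocks-outperform tilt baked in, and a rank correlation standing in for the "which inputs matter" question.

```python
# Sketch of the value-investing transfer described above: swap technical
# indicators for fundamental features and measure their relationship to
# subsequent returns. The cross-section is synthetic and illustrative.
import numpy as np
import pandas as pd

rng = np.random.default_rng(11)
fundamentals = pd.DataFrame({
    "pe_ratio": rng.uniform(5, 60, 800),
    "market_cap_bn": rng.lognormal(1, 1, 800),
})
# Toy 12-month forward return with a mild low-P/E-outperforms tilt.
fwd_return = -0.002 * fundamentals["pe_ratio"] + rng.normal(0, 0.15, 800)

# Rank correlation per feature: the same "which inputs matter" question,
# asked of fundamental data instead of technical indicators.
print(fundamentals.corrwith(fwd_return, method="spearman"))
```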

Lastly, the speaker highlights the essential skills and knowledge required for algorithmic trading, which involve merging finance and technology. Understanding the markets, big data statistics, and technology for automating strategies are crucial. Quantitative education programs are suggested as a means to acquire the necessary training in various operations and skills for becoming a successful algorithmic trader. Python is recommended as a great option for building algorithms.

  • 00:00:00 The CEO and co-founder of a trading strategy development company explains why AI and machine learning are exciting tools for algo trading and how they have been proven successful by large quantitative hedge funds. He also highlights that the accessibility of these tools has increased significantly due to open-source libraries and tools that do not require strong math or computer science backgrounds. This section also covers the basic terminology and best practices for traders and quants to apply these techniques, as well as specific applications to improve trading results.

  • 00:05:00 The speaker provides definitions for key terms related to artificial intelligence and machine learning as they pertain to algorithmic trading. Artificial intelligence is defined as the study of intelligent agents that perceive their environment and take action to maximize their chance of success. Machine learning, a subset of AI, focuses on algorithms that can learn and make predictions without explicit programming. Pattern recognition is the branch of machine learning focused on uncovering patterns in data, and association rule learning involves forming those patterns into if-then statements. Finally, the speaker briefly touches on Big Data, stating that it follows the four V's of volume, velocity, variety, and veracity.

  • 00:10:00 The speaker outlines some of the terms and concepts that will be discussed in the presentation including big data, veracity, artificial intelligence, machine learning, pattern recognition, and data mining. The speaker then goes on to provide some best practices and common pitfalls to avoid when building algorithmic trading strategies. These include defining success with tangible objectives, prioritizing simplicity over complexity, focusing on creating a robust process and workflow rather than a singular model, and having a healthy dose of skepticism throughout the entire process to avoid bias towards positive results.

  • 00:15:00 The speaker discusses how machine learning can help solve the problem of figuring out which indicators and data sets to use when building a trading strategy. The speaker explains how decision trees and random forests can be used to select indicators by searching for the indicators and values that best split the data set, with the indicators at the top of the tree being more important and having a higher relationship to the data set. The speaker also mentions that random forests are more robust and powerful than decision trees, but also more complex. Additionally, the speaker explores how indicator sets can be used together to create a more powerful combination using a technique known as a wrapper.

  • 00:20:00 The speaker discusses the use of technical indicators in algorithmic trading strategies and the benefits they offer in identifying underlying patterns and trends. They also address the question of whether it is possible to optimize indicator parameters based on machine learning, and highlight the use of ensemble learning to combine multiple classifiers and analyze data in order to find different patterns and information. The speaker then touches on the difference between feature selection and feature extraction in machine learning, and acknowledges the importance of being conscious of curve fitting when utilizing multiple classifiers.

  • 00:25:00 The presenters discuss the combination of pattern recognition and association rule learning as a way to leverage machine learning algorithms while still being able to interpret the output and apply it to their trading strategies. They provide an example of using a support vector machine to analyze the relationship between a three-period RSI and the difference between the open price and a 50-period SMA on the Aussie USD. The output produced clear patterns that were translated into trading rules. While this method allows traders to utilize their own intuition and experience, it also has several disadvantages, such as difficulties in analyzing high-dimensional data, automation, and interpreting the output. TRAIDE is presented as a possible solution that addresses these concerns and allows traders to leverage these algorithms to analyze any indicators they want.

  • 00:30:00 The presenter demonstrates how to build trading strategies on the cloud-based TRAIDE platform. The example given is building a strategy for trading the Aussie USD on a daily chart using the last five years of data. To avoid curve fitting, the algorithm is only trained up until January 1, 2015, leaving a year of out-of-sample data to test the strategy on data it has not seen before. The presenter emphasizes the importance of not wasting this out-of-sample data, to avoid bias towards selecting a backtest that performs well on a particular data set. Using machine learning algorithms for analyzing indicators and finding underlying patterns is a more flexible and powerful way to optimize trading strategies.

  • 00:35:00 The presenter demonstrates how to build a trading strategy using the TRAIDE platform and the open-source indicator library TA-Lib. They start by analyzing the price movement of Aussie USD over a five-year span and identify ranges where the algorithm was able to find strong signals. They refine the rules for going long by selecting indicator ranges and noting the relationship between them. By adding a rule for the price relative to a 50-period SMA, they can see two different ranges where TRAIDE's algorithms found strong signals. The advantage of using TRAIDE is that it allows for the analysis of the results of machine learning algorithms, finding where the strongest signals are and building rules directly off the histograms to see exactly what the rules are saying.

  • 00:40:00 The presenter discusses the procedure for building short rules for a trading strategy, including selecting the right indicators and refining rules to find a strong short signal. The presenter emphasizes the importance of testing and exploring different patterns with the indicators to find the best strategy. The discussion then moves on to generating code and testing the strategy out-of-sample in MetaTrader 4, with the ability to incorporate transaction costs. In response to a question, the presenter confirms that the approach is algorithmic trading.

  • 00:45:00 The presenter explains how to test the strategy they have built on the most recent out-of-sample data, which was not used in the strategy-building process. The simulation is run on MetaTrader, a popular platform for trading currencies and equities. The platform has an active community of developers who build automated strategies and custom indicators, and it provides an excellent opportunity to test and trade on the same data. The focus of the simulation is to test the strategy's performance on out-of-sample data. The tool is developed by a start-up that plans to make it available for free by white-labeling it directly to brokerages.

  • 00:50:00 The speaker explains how to incorporate risk and money management techniques into a strategy after backtesting. Adding a simple take profit and stop loss could significantly decrease the drawdown and protect against downside risks. The speaker then addresses a question on how to guard against curve-fitting in algorithmic trading. To avoid overfitting, the speaker emphasizes using wide bin selections, out-of-sample testing, and demo accounts before going live. Lastly, the speaker notes that their personal preference is for simplicity and transparency over black box neural networks for trading strategies.

  • 00:55:00 The speaker addresses questions about how their platform compares to others, such as Quantopian or QuantConnect, which focus more on automating existing strategies, whereas their platform is more focused on strategy discovery and analysis. They also discuss the importance of technical data in automated strategies, but highlight that their platform also includes other data sets, such as sentiment indicators. In addition, the speaker demonstrates the use of MetaTrader 4 and discusses the importance of risk and money management strategies in trading. Finally, the speaker covers best practices and common pitfalls in automated trading strategies.

  • 01:00:00 The speaker discusses the use of indicators in trading strategies and the trade-off between complexity and overfitting. They recommend using three to five indicators per strategy to strike a balance between containing enough information and avoiding overfitting. The speaker also discusses the importance of the data or feature being fed into the algorithm and how the output is implemented. They emphasize that it is less about the underlying algorithm and more about the indicators being used and how they are implemented. The speaker also addresses questions about using the genetic optimizer in MetaTrader 4 and the importance of using the same indicators as the ones used by the platform.

  • 01:05:00 The speaker discusses using machine learning for value investing. The same process that was previously discussed for algorithmic trading can be used for value investing, but instead of technical indicators, investors would use datasets that are important in quantifying the inherent value of a company. For example, an investor could use market cap or the price-earnings ratio to see the relationship between these data and the price movement of the underlying asset. The speaker also discusses ways to optimize for return per trade and how to know when an algorithm is out of sync with the market. Lastly, the speaker discusses the ease of learning MetaTrader and mentions that both Python and R have great libraries for machine learning, depending on one's experience and background in coding.

  • 01:10:00 The speaker discusses the necessary skills and knowledge required for algorithmic trading, which involves merging finance and technology. To design successful trading strategies, one must understand the markets, big data statistics, and technology to automate the strategies. Quantitative education programs can provide training on the various operations and skills necessary to become a successful algorithmic trader. Python is also recommended as a great option for those who want to build their algorithms.
Leveraging Artificial Intelligence to Build Algorithmic Trading Strategies
  • 2016.03.23
  • www.youtube.com
In this session on "Leveraging Artificial Intelligence to Build Algorithmic Trading Strategies", our guest speaker, Mr Tad Slaff, CEO of Inovance covered the...