
Algorithmic Trading in Commodity Markets

Sunil Guglani, Assistant Vice President at NCDEX (National Commodity and Derivatives Exchange), delves into the world of algorithmic trading in commodity markets, with a specific focus on agricultural commodities. NCDEX, the largest agricultural exchange in India, offers a diverse range of approximately 20 commodities for trading.

Guglani begins by introducing the three trading styles commonly employed in commodity markets: hedging, arbitrage, and directional trading. He describes hedging as an investment made to mitigate the risk of a primary investment; on NCDEX, farmers often hedge their underlying agricultural produce to minimize risk exposure.

Moving on, the speaker turns to two trading strategies prevalent in commodity markets: hedging and arbitrage. Guglani emphasizes the importance of highly correlated underlying assets in hedging strategies. For arbitrage, he covers two specific approaches: calendar spreads and pair trading, noting that the latter resembles a hedging strategy. He stresses that the commodities chosen for pair trading should be both highly correlated and cointegrated, and recommends applying the Augmented Dickey-Fuller (ADF) test to validate the relationship.
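
The cointegration check he describes can be sketched in a few lines of Python. The snippet below is a minimal illustration rather than the speaker's own code: the CSV file and column names are placeholders, and the Augmented Dickey-Fuller test comes from statsmodels.

    import pandas as pd
    import statsmodels.api as sm
    from statsmodels.tsa.stattools import adfuller

    # Placeholder data: daily prices of two candidate commodities for pair trading.
    prices = pd.read_csv("commodity_prices.csv", parse_dates=["date"], index_col="date")
    x, y = prices["commodity_a"], prices["commodity_b"]

    # Correlation is only a first filter; cointegration is the stricter requirement.
    print("correlation:", x.corr(y))

    # Regress one series on the other and test the residual spread for stationarity.
    # A low ADF p-value suggests the pair is cointegrated and suitable for pair trading.
    spread = sm.OLS(y, sm.add_constant(x)).fit().resid
    adf_stat, p_value, *_ = adfuller(spread)
    print(f"ADF statistic: {adf_stat:.3f}, p-value: {p_value:.3f}")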

In addition, Guglani provides an overview of the stages involved in algorithmic trading. The process begins with identifying and filtering the appropriate scrips or instruments to which the trading concept will be applied. The model is then visualized, followed by rigorous backtesting and optimization of the parameters or the model itself. The final steps are paper trading and, eventually, live trading, where real money is at stake.

Continuing his discussion, Guglani focuses on the initial steps of algorithmic trading. He emphasizes the importance of brainstorming trading ideas and finalizing a trading logic that aligns with the trader's objectives. Key considerations include the frequency of trades, the segment to trade, and the backtesting periods. To illustrate the challenge of making sense of raw data, he presents figures on India's Gross Domestic Product (GDP) across sectors, converts them into graphical form for easier interpretation, and suggests examining their correlation with price movements. He also shows visual representations of historical agricultural data, underscoring the importance of analyzing data from multiple perspectives.

The speaker proceeds to discuss the resources required for algorithmic trading in commodity markets. He groups trading strategies into two main areas, arbitrage and momentum, with techniques such as pair trading, correlation analysis, moving averages, and probability distributions commonly employed. Infrastructure is a crucial aspect of algorithmic trading, including connectivity to a broker through an API and hosting the algorithm either in the cloud or on-premises. Guglani also highlights the role of data visualization and technical indicators, which can be handled with tools such as Excel, Tableau, Power BI, and TradingView.

Guglani then surveys tools and platforms suitable for algorithmic trading in commodity markets. Non-programmers or semi-programmers often opt for platforms such as MetaTrader and Interactive Brokers, while Python has emerged as the leading language for those who code, with Python-based algorithmic trading platforms such as Quantopian, Blueshift, QuanTX, and Zerodha gaining popularity. He also lists essential libraries for data processing and backtesting, including Pandas, NumPy, BeautifulSoup, and Backtrader, along with libraries used for news and sentiment analysis such as Feedparser and various NLP toolkits.

In the subsequent segment, Guglani explains how to generate a trading idea and design a model using agricultural commodities as an example. Since agricultural commodities tend to be less volatile than equities or forex, he proposes a mean-reversion strategy using Bollinger Bands as the indicator, set at two standard deviations around the mean price. The filtering criterion is liquidity: he selects a commodity with a volume of at least 1,080 and recommends trading chana on NCDEX. To visualize the model, he suggests using investing.com to draw the Bollinger Bands, with the different levels indicating the buy and sell points.
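
A minimal sketch of that Bollinger Band logic in Python follows; the 20-bar look-back, the column names, and the data source are assumptions for illustration, not the exact settings used in the webinar.

    import pandas as pd

    def bollinger_signals(close: pd.Series, lookback: int = 20, width: float = 2.0) -> pd.DataFrame:
        """Mean-reversion signals from Bollinger Bands at `width` standard deviations."""
        mid = close.rolling(lookback).mean()
        std = close.rolling(lookback).std()
        upper, lower = mid + width * std, mid - width * std
        signal = pd.Series(0, index=close.index)
        signal[close < lower] = 1    # price below the lower band: expect reversion up, buy
        signal[close > upper] = -1   # price above the upper band: expect reversion down, sell
        return pd.DataFrame({"close": close, "mid": mid, "upper": upper,
                             "lower": lower, "signal": signal})

    # Example usage on a DataFrame 'df' that has a 'close' column:
    # signals = bollinger_signals(df["close"])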

Shifting the focus to backtesting, Guglani emphasizes its importance in verifying the logic of a trading model on historical data before any money is put at risk in a live environment. The steps involved include downloading data from an open data portal, importing the relevant libraries, writing supporting functions, generating buy and sell signals, visualizing the output, and evaluating the return generated by the strategy. He also suggests tracking parameters such as returns, maximum drawdown, maximum profit, and stop-loss during backtesting, and advises writing your own backtesting functions rather than relying solely on libraries pulled from platforms like GitHub.

The speaker then walks through the parameters his signal-generation function takes: the data frame, the strategy type, the entry and exit criteria, and the position type. Traders can configure whether open or close prices are used in the calculations, as well as the stop-loss and target percentages. Guglani also describes a statistical reporting function and another function that creates levels from the standard deviation of a chosen indicator. Finally, the main function invokes these helpers to return buy and sell signals for the chosen strategy and to generate a summary.
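
The exact code is not shown in the summary, but a hypothetical signature in the same spirit might look like the sketch below; every parameter name here is illustrative rather than taken from the speaker's implementation.

    import pandas as pd

    def generate_signals(df: pd.DataFrame,
                         strategy: str = "mean_reversion",
                         price_field: str = "close",   # use open or close prices
                         entry_std: float = 2.0,       # entry level in standard deviations
                         exit_std: float = 0.0,        # exit when price reverts to the mean
                         stop_loss_pct: float = 1.5,   # stop-loss, percent
                         target_pct: float = 8.0,      # profit target, percent
                         lookback: int = 20) -> pd.DataFrame:
        if strategy != "mean_reversion":
            raise NotImplementedError("only the mean-reversion branch is sketched here")
        price = df[price_field]
        mid = price.rolling(lookback).mean()
        std = price.rolling(lookback).std()
        out = df.copy()
        out["long_entry"] = price < mid - entry_std * std
        out["long_exit"] = price > mid - exit_std * std
        out["stop_loss"] = price * (1 - stop_loss_pct / 100)   # stop level per bar
        out["target"] = price * (1 + target_pct / 100)         # target level per bar
        return out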

Moving forward, Guglani demonstrates how to generate the backtesting reports by invoking his backtesting function on the positional strategy. The output includes a data frame containing all the trades, transaction charges, and slippages. The reports provide statistics and graphical representations of the output, showing percentage returns, transaction details, and cumulative returns over the chosen period. Analyzing the report, Guglani suggests setting the stop-loss at around -1.5% so that losses do not run to -2% or -3%. The maximum profit in the backtest was 8%, suggesting that the profit target can be set at around 8% or 9%.
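
The report statistics he mentions (returns, drawdown, best and worst trades) can be reproduced with a short helper; the sketch below assumes the backtest yields a pandas Series of per-trade returns expressed in percent.

    import pandas as pd

    def backtest_summary(trade_returns: pd.Series) -> dict:
        """Summary statistics for a series of per-trade returns in percent."""
        equity = (1 + trade_returns / 100).cumprod()          # cumulative equity curve
        drawdown = equity / equity.cummax() - 1               # running drawdown
        return {
            "total_return_pct": (equity.iloc[-1] - 1) * 100,
            "max_drawdown_pct": drawdown.min() * 100,
            "best_trade_pct": trade_returns.max(),
            "worst_trade_pct": trade_returns.min(),
            "win_rate_pct": (trade_returns > 0).mean() * 100,
        }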

The speaker then discusses how to optimize an algorithm. One approach is to write another algorithm that runs the original one many times with different sets of parameters. As an example, he optimizes the look-back period of the rolling window: a list of candidate look-back values is created, and a combination function is used to generate the full list of parameter sets. Guglani emphasizes that this optimization step is what lifts an algorithm's performance in commodity markets.
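
The "algorithm that runs the algorithm" idea maps naturally onto itertools.product. The sketch below assumes a user-defined run_backtest function (a placeholder for the trader's own backtest) and illustrative parameter values; it also covers the next step described below, collecting results in a data frame and picking the best combination.

    from itertools import product
    import pandas as pd

    # Candidate values for each parameter to be optimized (illustrative only).
    lookbacks = [10, 15, 20, 25, 30]
    band_widths = [1.5, 2.0, 2.5]
    stop_losses = [1.0, 1.5, 2.0]

    results = []
    for lookback, width, stop in product(lookbacks, band_widths, stop_losses):
        # run_backtest is a placeholder for the trader's own backtesting function.
        stats = run_backtest(lookback=lookback, width=width, stop_loss_pct=stop)
        results.append({"lookback": lookback, "width": width, "stop_loss": stop,
                        "return_pct": stats["total_return_pct"]})

    df_optimizer = pd.DataFrame(results)
    best = df_optimizer.loc[df_optimizer["return_pct"].idxmax()]   # best in-sample combination
    print(best)
    # Guard against overfitting: re-run the winning parameters on the next, unseen period.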

Continuing the discussion on optimization, Guglani loops over every combination from the three parameter lists, backtests each one, and stores the results in a data frame called DF optimizer, from which the combination yielding the maximum return can be identified. The optimized variables are then stored in the optimized row. He cautions against overfitting the data during optimization and stresses the importance of running the same parameters on the next period to confirm that they hold up. Finally, the speaker downloads the report to examine the results.

Guglani then presents the code used to optimize the trading parameters and shares the resulting statistics, including returns, mean returns, maximum drawdown, and win-loss ratio. The optimized parameters produced a return of 22.8%, a significant improvement over the 9% achieved with the previous parameter combination. He underscores the importance of paper trading to test algorithms without risking real money, and of diversification, portfolio management, and risk management when transitioning to live trading. He concludes by noting that the development process of algorithmic trading mirrors the software product development lifecycle, and that every stage must be executed diligently for the project to succeed.

  • 00:00:00 Sunil Guglani, Assistant Vice President at NCDEX, discusses algorithmic trading in commodity markets, specifically agricultural commodities. NCDEX is the largest agricultural exchange in India and offers around 20 commodities for trading. Guglani explains that there are three popular trading styles when it comes to commodities: hedging, arbitrage, and directional trading. Hedging is an investment made to mitigate the risk of a primary investment, and on NCDEX, farmers hedge their own underlying assets to minimize risk.

  • 00:05:00 The speaker discusses two types of trading strategies in the commodity market: hedging and arbitrage. He emphasizes the importance of highly correlated underlying assets in hedging. Within arbitrage, he covers two approaches: calendar spreads and pair trading, the latter being similar to hedging. He stresses that the commodities chosen for pair trading should be highly correlated and cointegrated, and recommends applying the Augmented Dickey-Fuller (ADF) test to confirm this. Additionally, he outlines the stages of algorithmic trading: identifying and filtering scrips or instruments to apply the trading concept, visualizing the model, backtesting, optimizing the parameters or the model, paper trading, and live trading.

  • 00:10:00 The speaker discusses the initial steps of algorithmic trading, starting with brainstorming ideas and finalizing a trading logic. He mentions the need to identify the frequency of trades, the segment to trade, and the backtesting periods. He then presents data on the Gross Domestic Product (GDP) of India's various sectors to demonstrate the difficulty of understanding raw data for trading strategies, converts the data into a graphical representation for clarity, and suggests looking at correlations with price. He also presents visual representations of agricultural data over time to show how data can be interpreted differently and why it should be analyzed in multiple ways.

  • 00:15:00 The speaker discusses the resources needed for algorithmic trading in commodity markets. He divides trading strategies into two main areas, which include arbitrage and momentum, with techniques such as pair trading, correlation, moving averages and probability distribution. One of the most important aspects of algorithmic trading is infrastructure, including connecting to a broker through API and hosting the algorithm in the cloud or on-premise. Additionally, tools such as Excel, Tableau, Power BI, and TradingView can be used for data visualization and applying technical indicators.

  • 00:20:00 The speaker discusses various tools and platforms that can be used for algorithmic trading in commodity markets. For non-programmers or semi-programmers, MetaTrader and Interactive Brokers are popular options, while Python is the leading language for those who program. In particular, Python-based algorithmic trading platforms such as Quantopian, Blueshift, QuanTX, and Zerodha are discussed. The speaker also highlights popular data-processing and backtesting libraries such as Pandas, NumPy, BeautifulSoup, and Backtrader, along with libraries used for news and sentiment analysis such as Feedparser and various NLP toolkits.

  • 00:25:00 The speaker explains how to come up with a trading idea and design a model using the example of agricultural commodities, which are relatively less volatile than equities and forex. The idea is to apply a mean-reversion strategy using Bollinger Bands, set at two standard deviations from the mean price range. The filtering criteria include selecting a liquid commodity with a volume of at least 1,080, for which the speaker suggests trading chana on NCDEX. The model can be visualized on investing.com by drawing the Bollinger Bands, with the different levels indicating the buy and sell points.

  • 00:30:00 The speaker discusses the process of backtesting an algorithmic trading model in commodity markets. Backtesting is important to verify the logic using historical data and avoid losing money if the model performs poorly in the live environment. Various parameters such as returns, maximum drawdown, maximum profit, and stop loss must be considered during backtesting. The speaker also explains the steps involved in backtesting, including downloading data from an open portal, importing libraries, writing supporting functions, generating buy and sell signals, visualizing the output, and evaluating the return generated by the strategy. The speaker also mentions using their own backtesting functions instead of libraries from Github.

  • 00:35:00 The speaker explains the parameters that a function takes in to return buy and sell signals: the data frame, strategy type, entry and exit criteria, and position type. The function allows traders to configure whether open or close prices are used in the calculations, as well as the stop-loss and target percentages. The speaker also discusses a function that generates statistical reports and a function that creates levels using the standard deviation of a chosen indicator. Lastly, the main function invokes the other functions to return buy and sell signals for the chosen strategy and generate a summary.

  • 00:40:00 The speaker shows how to generate backtesting reports by invoking his backtesting function on the positional strategy. The output includes a data frame with all the trades, transaction charges, and slippages. The backtesting function is invoked and the reports are generated, including statistics and graphical representations of the output showing the percentage returns, transactions, and cumulative returns over a period of time. The speaker analyzes the report and advises setting the stop-loss at around -1.5% to avoid incurring losses of -2% or -3%. The maximum profit obtained was 8%, meaning the profit target can be set at around 8% or 9%.

  • 00:45:00 The speaker discusses the process of optimizing an algorithm. One way to optimize is to create another algorithm that runs the original algorithm multiple times using different sets of parameters. As an example, the speaker optimizes the look-back period of the rolling window, creating a list of candidate look-back values and using a combination function to build a comprehensive list of all the parameter sets. The speaker emphasizes the importance of optimizing algorithms to improve their performance in commodity markets.

  • 00:50:00 The speaker uses three parameter lists to run each combination through the backtest, storing the results in a data frame called DF optimizer. The combination that yields the maximum return is identified and the optimized variables are stored in the optimized row. The optimization process must be careful not to overfit the data, so the same parameters should be run on the next period to confirm that they are sound. Finally, the speaker downloads the report to see the results.

  • 00:55:00 The speaker goes over the code used to optimize the trading parameters and the statistics it produced, including the returns, mean returns, maximum drawdown, and win-loss ratio. The optimized parameters resulted in a return of 22.8%, a significant improvement over the previous combination's 9%. They emphasize the importance of paper trading to test the algorithm without investing money and the need for diversification, portfolio and risk management when going live. They also note that the development process of algorithmic trading is similar to software product development lifecycle.

  • 01:00:00 The speaker explains how the stages of algorithmic trading can be compared to those of software development, with formulating the trading strategy being similar to the planning and requirement stage, and paper trading and simulation trading mapping to Quality Assurance. The speaker emphasizes that all stages are important and failing to properly execute any one of them can lead to the failure of the entire project.
Algorithmic Trading in Commodity Markets
  • 2020.02.14
  • www.youtube.com
In this webinar "Algorithmic Trading in Commodity Markets", presented by Sunil Guglani, AVP, NCDEX, we go through the following concepts:- Stages of Algorith...
 

Predict Trends In Stock Markets Using AI And Python Programming

This webinar session offers a hands-on learning tutorial focused on predicting trends using AI in the stock market. Participants will actively engage in creating a classification tree model using a Jupyter Notebook. The primary objective is to develop a classification tree that can serve as a tool for establishing trading rules based on the anticipated positive or negative future returns.

The session is built around a decision tree model, a core machine learning technique, delivered in a hands-on, interactive format: attendees work directly on a Python notebook alongside the instructor.

The webinar aims to cover the following key areas:

  • Gaining an understanding of the underlying concepts and intuition behind various indicators and learning their practical application
  • Working with data from the US equities markets to generate essential trading indicators

The recorded session delves into how the decision tree model can be leveraged in trading to extract valuable trading rules. These rules serve as a foundation for making informed decisions on when to buy or sell securities.

Throughout the video, participants will acquire knowledge on:

  • Utilizing artificial intelligence (AI) and the Python programming language to predict trends in the stock markets
  • Effectively visualizing data to gain insights
  • Constructing trading rules based on future returns using a decision tree model
  • Understanding predictor variables and target variables, comprehending the rationale behind each technical indicator, and effectively implementing them
  • Exploring a range of trading indicators
  • Applying the concepts learned to real-world data from the US equities markets to develop the necessary trading indicators

To fully benefit from this webinar, attendees should possess:

  • Technical knowledge related to AI and machine learning
  • Prior experience in trading
  • A solid understanding of the stock market and its dynamics

Regarding variables, the predictor variables in this context refer to the technical indicators employed to predict market trends. On the other hand, the target variable signifies the expected trend for the following day, specifically whether it will be positive or negative.
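
As a rough illustration of the workflow described above, the sketch below builds a few simple indicator features from daily closing prices and fits a classification tree on the sign of the next day's return. The indicators, parameters, and the 'prices' DataFrame are assumptions for illustration, not the webinar's exact notebook.

    import pandas as pd
    from sklearn.tree import DecisionTreeClassifier
    from sklearn.model_selection import train_test_split

    def make_features(df: pd.DataFrame) -> pd.DataFrame:
        out = pd.DataFrame(index=df.index)
        out["ret_1d"] = df["close"].pct_change()                           # 1-day return
        out["sma_ratio"] = df["close"] / df["close"].rolling(20).mean()    # price vs 20-day SMA
        out["volatility"] = out["ret_1d"].rolling(10).std()                # short-term volatility
        out["target"] = (df["close"].shift(-1) > df["close"]).astype(int)  # next day up = 1
        return out.dropna()

    data = make_features(prices)   # 'prices' is assumed to hold a 'close' column
    X, y = data[["ret_1d", "sma_ratio", "volatility"]], data["target"]
    X_train, X_test, y_train, y_test = train_test_split(X, y, shuffle=False, test_size=0.3)

    tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X_train, y_train)
    print("out-of-sample accuracy:", tree.score(X_test, y_test))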

Predict Trends In Stock Markets Using AI And Python Programming
  • 2019.09.06
  • www.youtube.com
This session is a hands-on learning tutorial to Predict Trends using AI in the stock market, where you will work directly on a Jupyter Notebook to create a c...
 

Quantitative Portfolio Management Strategies By Prodipta Ghosh - July 23, 2019

Prodipta Ghosh, Vice-President of Quantitative Portfolio Management, emphasizes that there is no one-size-fits-all strategy for stock trading due to the presence of uncertainties in financial markets, the dynamic nature of the market over time, and the varying goals and risk appetites of individuals. He highlights that even with a perfect vision or model of the world, it would be impossible to provide answers to traders' questions as each person operates within a unique context. Therefore, no perfect strategy exists for anyone in the world.

During his presentation, Prodipta Ghosh delves into four quantitative portfolio management strategies. These strategies include utilizing Bollinger Bands, employing a simple moving average crossover strategy, analyzing the doji candlestick pattern, and incorporating the Relative Strength Index (RSI). While a high Sharpe ratio may theoretically suggest the best strategy, past performance cannot always guarantee future results. Hence, it is crucial to construct a portfolio that encompasses diverse strategies and assets to mitigate risk and avoid significant drawdowns. Ghosh demonstrates the benefits of equally allocating capital to all four strategies, showcasing how a diversified portfolio can withstand market volatility and prevent substantial losses.

Prodipta Ghosh provides an explanation of the fundamentals of portfolio management and distinguishes it from investing in a single stock. Portfolio management entails developing a strategy for multiple strategies or assets, taking into account risks, uncertainties, the passage of time, and specific contexts. The value of a strategy is derived from the underlying returns multiplied by positions, while the portfolio value is determined by the weighted stream of underlying returns. To optimize portfolio management, a mathematical problem is solved by defining a function U that is dependent on the portfolio value P, and finding the weights W that maximize U. Different optimization strategies, such as mean-variance optimization, Kelly optimization, and risk penalty optimization, can be employed based on how U is defined and the optimization approach.
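
In symbols, the problem is to choose weights w maximizing U(P(w)); a common concrete form is the mean-variance utility, expected return minus a risk penalty. The sketch below solves that version with scipy; the expected returns, covariance matrix, and risk-aversion value are invented purely for illustration.

    import numpy as np
    from scipy.optimize import minimize

    mu = np.array([0.08, 0.10, 0.12])            # assumed expected annual returns
    sigma = np.array([[0.04, 0.01, 0.00],
                      [0.01, 0.09, 0.02],
                      [0.00, 0.02, 0.16]])       # assumed covariance matrix
    risk_aversion = 3.0

    def neg_utility(w):
        # U(w) = expected return - 0.5 * risk_aversion * variance; minimize the negative.
        return -(w @ mu - 0.5 * risk_aversion * w @ sigma @ w)

    n = len(mu)
    res = minimize(neg_utility, x0=np.full(n, 1 / n),
                   bounds=[(0, 1)] * n,                                         # long-only
                   constraints=[{"type": "eq", "fun": lambda w: w.sum() - 1}])  # fully invested
    print("optimal weights:", np.round(res.x, 3))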

The speaker proceeds to discuss quantitative portfolio management strategies and the role of optimization problems in the process. He explores the various constraints that can be specified in an optimization problem, such as limiting the range of a portfolio, and the types of portfolios that can be constructed, including those based on alpha strategies, factor portfolios, or collections of individual stocks. The objective is to define a maximization condition that results in a portfolio with maximum value or function of portfolio value. Additionally, the speaker addresses the question of whether an equally weighted portfolio is reasonable, which depends on specific circumstances and can be viewed as an optimization problem with a penalty on the square of errors.

Prodipta Ghosh delves into the concept of risk and utility in portfolio management, highlighting the challenges in estimating expected returns and risks. He introduces modern portfolio theory and quadratic utility as approaches to maximize returns while minimizing risk. The speaker uses the example of the St. Petersburg paradox to illustrate how human decision-making may deviate from mathematical averages.

The relationship between utility and risk is explained by Prodipta Ghosh, who emphasizes their significance in constructing a sound portfolio. He demonstrates the concept of the risk premium, which quantifies the difference between the expected payout of a risky investment and the certain payment an individual would accept in its place. He also explains that a utility function is a mathematical representation of wealth that tells us how much an extra dollar is worth, which helps in deciding how much to invest. Understanding the interplay between utility and risk enables investors to build portfolios that strike a balance between risk and return.

The speaker discusses the notion of risk aversion in investment, which holds that investors prefer a certain outcome over one with fluctuating returns. Risk aversion is a standard assumption in quantitative portfolio management, with the risk premium represented by the Greek letter pi: the amount an investor is willing to pay to avoid a zero-mean fluctuating return. The speaker then explains the quadratic utility function and how it leads to optimizing a portfolio's mean and variance. Building a portfolio based on Modern Portfolio Theory involves finding a balance between the two.

Prodipta Ghosh proceeds to explain the process of optimizing expected portfolio utility by striking a balance between the mean and variance. He utilizes Excel to simulate returns from different assets and calculates the covariance matrix, which is then utilized to determine portfolio returns, variance, and risk based on different weightings. By varying the weights and calculating the portfolio return and variance for all possible scenarios, an optimization problem can be solved. The resulting plot showcases the Sharpe ratio, which represents the ratio of return to risk, for each set of weights.
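
The same Excel exercise can be reproduced in Python by drawing many random weight vectors and computing the return, risk, and Sharpe ratio of each; the inputs below (expected returns, covariance, risk-free rate) are again illustrative assumptions.

    import numpy as np

    mu = np.array([0.08, 0.10, 0.12])            # assumed expected returns
    sigma = np.array([[0.04, 0.01, 0.00],
                      [0.01, 0.09, 0.02],
                      [0.00, 0.02, 0.16]])       # assumed covariance matrix
    rf = 0.03                                    # assumed risk-free rate

    rng = np.random.default_rng(0)
    rows = []
    for _ in range(5000):
        w = rng.random(len(mu))
        w /= w.sum()                             # normalize so the weights sum to 1
        port_ret = w @ mu
        port_vol = np.sqrt(w @ sigma @ w)
        rows.append((port_ret, port_vol, (port_ret - rf) / port_vol))

    returns, vols, sharpes = map(np.array, zip(*rows))
    best = sharpes.argmax()
    print(f"max Sharpe {sharpes[best]:.2f} at return {returns[best]:.2%}, risk {vols[best]:.2%}")
    # Plotting vols against returns, colored by Sharpe ratio, reproduces the cloud of
    # portfolios whose upper-left edge traces out the efficient frontier.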

The concept of efficient frontiers in modern portfolio theory is then introduced by Prodipta Ghosh. He describes the efficient frontier as the range where a portfolio should lie in order to achieve maximum returns based on a given risk tolerance. He further explains that the addition of a low-risk asset, such as a risk-free asset, adds an interesting dimension to the concept. The highest Sharpe ratio is identified from the tangent portfolio, which is the portfolio formed by combining the risk-free asset with the efficient frontier. The line connecting zero to the tangent portfolio is referred to as the market line, and it presents a choice between investing in the market portfolio or opting for a risk-free asset while defining the allocation.

Prodipta Ghosh delves into the Capital Asset Pricing Model (CAPM), which changes the perspective of risk in finance by measuring it as a contribution to the market portfolio rather than standalone risk. CAPM captures the required rate of return for a risky asset, calculated as the risk-free rate plus a contribution to the market portfolio in terms of risk multiplied by the difference between the market return and the risk-free return. This concept provides a theoretical foundation for value investing. Through various models, such as discounted cash flow and compression models, investors can estimate a fair price using CAPM and capitalize on a better understanding of idiosyncratic risk.
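
The CAPM relationship he refers to is compact enough to state directly; the numbers below are purely illustrative.

    # Required return on a risky asset under CAPM:
    # r_required = r_f + beta * (r_market - r_f)
    risk_free = 0.04          # assumed risk-free rate
    market_return = 0.10      # assumed expected market return
    beta = 1.2                # assumed sensitivity to the market portfolio

    required_return = risk_free + beta * (market_return - risk_free)
    print(f"required rate of return: {required_return:.1%}")   # 11.2%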

The speaker discusses various portfolio management strategies, with a specific focus on factor investing. Factor investing involves considering multiple risk factors, beyond just market risk, when constructing a portfolio. Each factor carries a premium associated with it, leading to different investing styles, including factor allocation, factor timing, or a return to value investing and stock picking. Factor investing helps explain idiosyncratic risk and provides a new interpretation of alpha and beta, where alpha and beta become the total alpha if the delta F in the equation is time-invariant and positive.

Prodipta Ghosh highlights the major differences between value investing and factor investing and considers which approach makes more sense for retail traders. He notes that value investing requires extensive research on individual companies and often entails concentration in idiosyncratic risk, which may not be suitable for small-scale retail traders. On the other hand, factor investing involves researching the market drivers of risk and systematically leveraging them to allocate investments based on expected returns. The speaker briefly touches upon the distinctions between discretionary and quantitative research, stating that quantitative management can offer more opportunities for outperformance if utilized correctly.

The speaker compares value investors and quantitative strategists, noting that while value investors have a lower probability of success, they have the potential to generate substantial returns, whereas quant strategists have a higher probability of success but generate relatively lower yet more consistent returns. The fundamental law of active management expresses the information ratio, the ratio of outperformance to the portfolio's active risk, as the information coefficient (skill level) multiplied by the square root of N, where N is the number of independent bets that can be made. Quantitative investors can achieve a much larger N, which is what allows them to optimize a factor portfolio. Ghosh also touches on other optimization methods, such as Kelly optimization, which aims to maximize terminal wealth over multiple periods by compounding wealth, and risk parity optimization.
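
The fundamental law he cites, IR ≈ IC × √N, is easy to illustrate with made-up numbers: a small edge applied to many independent bets can match or beat a larger edge applied to only a few.

    import math

    # Illustrative numbers only: skill (information coefficient) and number of bets.
    ic_value, n_value = 0.10, 20     # concentrated, research-heavy value book
    ic_quant, n_quant = 0.03, 500    # small edge spread across many positions

    print("value IR:", round(ic_value * math.sqrt(n_value), 2))   # about 0.45
    print("quant IR:", round(ic_quant * math.sqrt(n_quant), 2))   # about 0.67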

Prodipta Ghosh moves on to discuss the Kelly portfolio strategy, emphasizing its dominance in the long run due to its focus on maximizing final wealth. However, he cautions that the Kelly strategy is also the most aggressive in terms of risk and may not be suitable for retirees or individuals who cannot afford short-term risks. He further explains the risk parity strategy, which aims to equalize individual risk contributions and ensures that the sum of all assets' risks remains balanced. While there is no theoretical justification for this approach, it is considered a sensible allocation of risk. When deciding between the Kelly strategy, risk parity, and mean-variance optimization, one must consider their risk appetite and the accuracy of their modeling, which can be enhanced through factor modeling. Ultimately, these strategies revolve around balancing risk and return, with a strong emphasis on measuring and managing risk effectively.
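
A minimal sketch of the risk parity idea follows, using the simple inverse-volatility version that ignores correlations (a full equal-risk-contribution solution iterates on the covariance matrix); the covariance numbers are the same illustrative values used earlier.

    import numpy as np

    sigma = np.array([[0.04, 0.01, 0.00],
                      [0.01, 0.09, 0.02],
                      [0.00, 0.02, 0.16]])      # assumed covariance matrix

    vols = np.sqrt(np.diag(sigma))              # individual asset volatilities
    w_risk_parity = (1 / vols) / (1 / vols).sum()
    print("inverse-volatility weights:", np.round(w_risk_parity, 3))
    # Lower-volatility assets receive larger weights, so each asset contributes
    # a more comparable share of total portfolio risk.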

Prodipta Ghosh proceeds to discuss alpha strategies and how to combine them into a well-rounded portfolio. While mean-variance optimization can be applied to alpha strategies, it tends to push the entire allocation into the single best strategy based solely on historical data. To address this, Ghosh introduces ensemble approaches in which all strategies are given an equal vote. Another approach is the regime-switching portfolio, which uses techniques such as hidden Markov models or change point analysis to allocate capital among the different alpha strategies. A further technique is the no-regret approach, which treats allocation as an exploration-versus-exploitation problem: each alpha strategy is explored systematically to identify the one with the most potential before capital is concentrated in it.

Prodipta Ghosh notes that there are numerous resources available for further exploration of portfolio optimization, from Wikipedia to Quantra's recently launched course on quantitative portfolio management. He mentions several opportunities for learning and growth in the industry through Quantra's programs, such as the interactive self-paced learning portal and Blueshift, which offers free backtesting. Ghosh thanks the audience for their participation and encourages them to visit the website for additional information and resources.

  • 00:00:00 Prodipta Ghosh, Vice-President of Quantitative Portfolio Management, explains that there is no one-size-fits-all strategy for stock trading as uncertainties exist in financial markets, the market changes over time and individuals have different goals and risk appetites. He points out that even with a perfect vision or model of the world, it would not be possible to come up with answers for the kind of questions asked by traders as everyone has a different context. Therefore, there is no perfect strategy for anyone in the world.

  • 00:05:00 Prodipta Ghosh discusses four quantitative portfolio management strategies, including using Bollinger Bands, a simple moving average crossover strategy, a doji candlestick pattern, and the Relative Strength Index (RSI). While in theory the best strategy is one with a high Sharpe ratio, past performance is not always a guarantee of future results. Therefore, building a portfolio of different strategies and assets is crucial to avoid large drawdowns and mitigate risk. By equally allocating capital to all four strategies, Ghosh demonstrates how a diversified portfolio can avoid significant losses in the face of market volatility.

  • 00:10:00 Prodipta Ghosh explains the basics of portfolio management and how it differs from investing in a single stock. Portfolio management involves creating a strategy for multiple strategies or assets, which is concerned with risks and uncertainties, the passage of time, and context. The value of a strategy is driven by the underlying returns multiplied by positions. On the other hand, the portfolio value is the weighted underlying stream of returns. Portfolio management solves a mathematical problem by defining a function U, which is a function of the portfolio value P, and finding the weights W that satisfy the maximization condition to maximize U. Depending on how U is defined and how the optimization is done, there can be a variety of possibilities for optimization strategies, such as mean-variance optimization, Kelly optimization, and risk penalty optimization.

  • 00:15:00 The speaker discusses quantitative portfolio management strategies and how optimization problems play into it. He talks about the types of constraints one can specify in an optimization problem, such as limiting the range of a portfolio, and the different types of portfolios that can be built, including ones based on alpha strategies, factor portfolios, or a collection of individual stocks. The goal is to define a maximization condition that results in a portfolio with a maximum value or function of portfolio value. The speaker also addresses the question of whether an equally weighted portfolio makes sense, which depends on specific circumstances and can be thought of as an optimization problem with a penalty on the square of errors.

  • 00:20:00 Prodipta Ghosh discusses the concept of risk and utility in portfolio management. He explains that while it may seem straightforward to estimate expected returns and risks, in reality it can be quite tricky. He introduces modern portfolio theory and quadratic utility, which aim to maximize returns while minimizing risk. He also uses the example of the St. Petersburg paradox to illustrate how humans may not always make decisions based on mathematical averages.

  • 00:25:00 Prodipta Ghosh explains the relationship between utility and risk, and how they lead to a good portfolio. He demonstrates the concept of the risk premium, which measures the difference between the expected payout of a risky investment and the certain payment a person would be willing to settle for instead. He also explains that a utility function is a mathematical function of wealth that tells us how much one extra dollar is worth and helps in determining how much to pay. By understanding the relationship between utility and risk, investors can build a good portfolio that balances risk and return.

  • 00:30:00 The speaker discusses the concept of risk aversion in investment, which means that investors prefer certain investments over fluctuating ones. Risk aversion is a common assumption in quantitative portfolio management, and the risk premium, denoted by the Greek letter pi, represents the amount an investor is willing to pay to avoid a zero-mean fluctuating return. The speaker then explains the quadratic utility function and how it leads to the optimization of a portfolio's mean and variance. Building a portfolio based on Modern Portfolio Theory involves finding a balance between the mean and variance of the portfolio.

  • 00:35:00 Prodipta Ghosh explains the process of optimizing expected portfolio utility by balancing the mean and variance. He uses Excel to simulate returns from different assets and compute the covariance matrix, which is then used to calculate portfolio returns, variance, and risk based on different weights. By varying the weights and computing the portfolio return and variance for all possible cases, an optimization problem can be solved. The resulting plot shows the Sharpe ratio, which is the ratio of return versus risk, for each set of weights.

  • 00:40:00 Prodipta Ghosh explains the concept of efficient frontiers in modern portfolio theory. He discusses how the efficient frontier is the region where a portfolio should lie to obtain maximum return for a given risk tolerance. He further explains that when a low-risk asset is added, such as a risk-free asset, the concept becomes more interesting, and the highest Sharpe ratio is found at the tangent portfolio. He also describes the line connecting zero to the tangent portfolio as the market line and explains how the decision becomes a choice between buying the market portfolio and buying the risk-free asset while defining the allocations.

  • 00:45:00 Prodipta Ghosh explains the Capital Asset Pricing Model (CAPM). CAPM changes the concept of risk in finance, measuring the risk as a contribution to the market portfolio rather than a standalone risk. Through the use of mathematical equations, CAPM captures the required rate of return for a risky asset, which is the risk-free rate plus a contribution to the market portfolio in terms of risk multiplied by the difference in the market return and the risk-free return. This concept provides a theoretical basis for value investing. Through various models, including discounted cash flow and compression models, investors estimate a fair price using CAPM and capitalize on their better understanding of idiosyncratic risk.

  • 00:50:00 Prodipta Ghosh discusses various portfolio management strategies, focusing on factor investing. Factor investing involves considering multiple risk factors rather than just market risk in creating a portfolio. Ghosh explains that each factor has a premium associated with it and this leads to different investing styles, including factor allocation, factor timing, or simply going back to value investing and stock picking if everything is unpacked. Factor investing helps explain the idiosyncratic risk and provides a new interpretation of alpha and beta, with alpha and beta becoming the total alpha if the delta F in the equation is time-invariant and positive along with beta.

  • 00:55:00 Prodipta Ghosh discusses the major differences between value investing and factor investing, and which one makes more sense for a retail trader. Ghosh notes that value investing requires a high level of research on individual companies and usually involves a concentration in idiosyncratic risk, which may not be suitable for small-scale retail traders. On the other hand, factor investing involves researching the market drivers of risk and systematically probing them for factors to allocate investments based on the expected return. Ghosh also briefly touches upon the differences between discretionary and quantitative research, stating that quantitative management can offer more opportunities for outperformance if used correctly.

  • 01:00:00 Prodipta Ghosh explains the difference between value investors and quantitative strategists. While value investors have a low probability of success but could generate multi-baggers, quant strategists have a high probability of success but generate relatively lower yet consistent returns. The fundamental law of active management describes the information ratio as the ratio of outperformance to the portfolio's risk, which equates to the information coefficient, or skill level, multiplied by the square root of n, where n is the number of independent bets that can be taken. As a result, a quantitative investor can have a higher n, which is why they can optimize a factor portfolio. Ghosh also explains other optimization methods such as Kelly optimization, which tries to maximize terminal wealth over multiple periods by compounding wealth, and risk parity optimization.

  • 01:05:00 The speaker explains the Kelly portfolio strategy and its dominance in the long run due to its focus on maximizing final wealth. However, it is also the most aggressive in terms of risk, meaning it is not suitable for retirees or people who cannot afford short-term risks. The speaker also discusses the risk parity strategy, which equalizes individual risk contributions so that each asset contributes the same amount of risk to the portfolio. There is no theoretical justification for this, but it is considered a sensible allocation of risk. When deciding between Kelly, risk parity, and mean-variance optimization, one should consider one's risk appetite and the accuracy of the modeling, which can be improved using factor models. These strategies are all about balancing risk and return, with measuring and managing risk being the more critical part.

  • 01:10:00 The speaker discusses alpha strategies and how to combine them to create a good portfolio. While mean-variance optimization can be used for alpha strategies, it has the issue that the whole allocation goes to a single, best strategy chosen purely on historical data. One way to address this is to use an ensemble in which all the strategies have an equal vote. Another approach is the regime-switching portfolio, which uses techniques such as hidden Markov models or change point analysis to allocate capital between the different alpha strategies. One particular technique, called no regret, treats allocation as an exploration-versus-exploitation problem, where the objective is to explore each alpha strategy and figure out which one has the most potential before going heavily into it.

  • 01:15:00 The speaker discusses various approaches to portfolio optimization, including the use of exponential weighting and a learning rate to balance exploitation and exploration. He also mentions that many resources are available on the topic, including Wikipedia and Quantra's recently launched course on quantitative portfolio management. Additionally, the speaker talks about several opportunities for learning and growth in the industry through Quantra's programs, including the interactive self-paced learning portal and Blueshift, which offers free backtesting. He wraps up by thanking the audience for their participation and encouraging them to visit the website for more information.
 

Algorithmic Trading | Is It Right for You & How To Get Started

The host introduces Nitin Aggarwal, co-founder of Alphom Advisory, who shares valuable insights on the world of algorithmic trading. Aggarwal begins his presentation by defining algorithmic trading and highlighting its significance in the financial industry: the use of computer algorithms to execute trades automatically, which plays a crucial role in modern markets.

Aggarwal goes on to discuss the evolving nature of algorithmic trading and how its definition can vary by geography and regulatory framework. In the United States, any form of systematic trading falls under the umbrella of algorithmic trading, whereas in other regions it is considered algorithmic trading specifically when a computer algorithm autonomously determines order parameters. This distinction reflects the diverse approaches and perspectives within the field.

The speaker then sheds light on current industry trends in algorithmic trading. He highlights the increasing prevalence of DIY (do-it-yourself) traders using algorithmic strategies and presents data showing significant growth in the market share of algorithmic trading in Asia, the United States, and India. Despite this growth, he acknowledges that retail participation in algorithmic trading remains relatively low and promises to explain why in upcoming slides.

Moving forward, Aggarwal explores the impact of algorithmic trading on the job market. He explains how automation is replacing human traders, with firms now hiring coders to develop trading strategies and letting machines trade. He emphasizes four key advantages of machine trading over human trading: uptime, reaction time, scalability, and the ability to learn and improve. Machines can continuously monitor risk, execute trades promptly, adapt to market changes efficiently, and learn from experience more effectively than human traders.

Addressing the low retail participation in algorithmic trading, Aggarwal outlines several reasons for the gap. First, algorithmic trading requires technical knowledge, including coding and statistics, combined with a solid understanding of finance and market dynamics. Second, access to relevant market data is crucial for backtesting and developing robust strategies. Third, transitioning from manual to algorithmic trading is difficult without guidance from experienced market practitioners. Despite these obstacles, he highlights the benefits of algorithmic trading, such as scalability, effective risk management, and the elimination of human error, which make it an attractive option for traders.

Aggarwal then introduces the EPAT course offered by QuantInsti. He discusses the difficulty of finding a platform that provides comprehensive support for algorithmic trading, encompassing guidance from market practitioners, technical knowledge, and up-to-date content. The EPAT course aims to bridge this gap with content created by industry practitioners and continuously updated to reflect the latest trends. The course also provides dedicated faculty support and a market-oriented approach, making it a resource both for beginners venturing into algorithmic trading and for those looking to advance their careers in the field.

Further elaborating on the course content, Aggarwal outlines the modules covered in the program. It begins with a primer module that establishes a foundation in basic statistics, probability theory, and the application of financial models, then progresses to Python basics and advanced statistics, including Gaussian models used in understanding more complex strategies. The course also includes sessions on resume building, setting up a personal trading desk, and mock interviews for placements with over 100 partnered companies. Throughout the course, the faculty provides personal assistance to students, ensuring that questions and difficulties are promptly addressed. Joining EPAT also grants exclusive benefits, including access to community events and features, which are discussed in later sections.

Continuing his presentation, Aggarwal dives into the details of each module. The course commences with a building-blocks module, setting the foundation for the equity, FX, and futures strategy modules, where students engage in hands-on exercises to create various trading strategies. The program then covers market microstructure and implementation, including backtesting ideas on historical data using different APIs and brokers, and introduces machine learning as an emerging field within algorithmic trading. A dedicated module focuses on setting up algorithmic trading infrastructure and operations, and the course also covers options trading, portfolio optimization, and risk management. Finally, students undertake a project and, upon passing the exam, receive a verified certificate validating their expertise in algorithmic trading.

Aggarwal then shifts the audience's attention to the Algorithmic Trading program offered by QuantInsti. He highlights that upon completing the comprehensive 300+ hour course, participants receive a verified EPAT certificate. The faculty includes renowned professionals in the industry who are approachable and provide hands-on experience in different asset classes and roles. The course covers various aspects, ranging from CV preparation to access to APIs and broker networks for seamless implementation. Furthermore, the QuantInsti team assists participants with fundraising opportunities, making it an ideal choice for those seeking a comprehensive education in algorithmic trading.

Following Aggarwal's discussion, Nadine takes the stage to describe the benefits of being part of the EPAT community. She emphasizes the lifelong guidance available to community members, as well as the opportunity to connect with fellow students from over 165 countries. Exclusive events and sessions, free and subsidized access to brokers, and access to backtesting tools like Blueshift are among the privileges of the community. Furthermore, EPAT adds a fundamental quantitative dimension to an individual's existing skill set, enhancing their professional profile. Notably, the EPAT program is recognized under Singapore's Financial Training Scheme, and working professionals in Singapore can receive a reimbursement of 2,000 Singapore dollars.

Concluding the presentation, Ben Magnano shares his personal journey in algorithmic trading. He recounts his early struggles with day trading in 2005 until he found QuantInsti, where he received rigorous training in quantitative and algorithmic trading fundamentals. Ben highlights the importance of learning Python and being able to write his own programs, eventually earning his certificate as a quantitative trader. This achievement opened doors for him, leading to an opportunity as a research consultant at WorldQuant, where he continues to refine his coding skills and stay updated with the latest industry trends, such as artificial intelligence.

In the final moments of the video, the speaker acknowledges the tremendous growth in algorithmic trading and how it is increasingly preferred by traders who want to avoid constantly monitoring their trades. The speaker expresses gratitude for the analysis provided by the presenters and closes with a summary of the EPAT program, designed to equip participants with industry-ready skills in the quantitative and FinTech domain so that they are well prepared to thrive in algorithmic trading.

  • 00:00:00 In this section, the host introduces Nitin Aggarwal, the co-founder of Alphom Advisory, who will be discussing algorithmic trading. Aggarwal starts by defining what algorithmic trading is and why it is important. He also talks about industry trends and career prospects in algorithmic trading. Finally, he discusses how the QuantInsti program can help individuals get started or build a career in algorithmic trading, and he assures the audience that he will answer their questions throughout the presentation.

  • 00:05:00 In this section, the speaker discusses the definition of algorithmic trading and how it differs based on geography and regulations. In the US, any systematic trading is considered algorithmic trading, while in other regions, it may only be considered algorithmic if a computer is automatically determining order parameters. The speaker notes that algorithmic trading is evolving quickly and has contributed to a rise in trading conducted by DIY traders. The speaker also presents data that shows the market share for algorithmic trading has grown significantly in Asia, the US, and India, but retail participation in algorithmic trading remains low. The speaker promises to explain why this is the case in upcoming slides.

  • 00:10:00 In this section, the speaker discusses the rise of algorithmic trading and how it is impacting the job market. He explains how algorithmic trading is getting automated and replacing human traders, and how firms are now hiring coders to develop their strategies and let machines trade. The speaker highlights four main advantages of machine trading over human trading, such as uptime, reaction time, scalability, and the ability to learn and improve. He argues that machines can monitor risk, take trades, and react to market changes faster and more efficiently than human traders.

  • 00:15:00 In this section, the speaker discusses the reasons why retail participation in algorithmic trading is still low despite its increasing popularity and benefits. Firstly, algorithmic trading requires technical knowledge such as coding and statistics in addition to finance and market understanding. Secondly, accessing relevant market data for backtesting and developing strategies is crucial. Lastly, transitioning from manual trading to algorithmic trading can be difficult without guidance from a market practitioner who has experience in the field. Nonetheless, the benefits of algorithmic trading such as scalability, risk management, and elimination of human error make it an attractive option for traders.

  • 00:20:00 In this section, the speaker discusses the benefits of QuantInsti's EPAT course for those interested in algorithmic trading. He highlights the difficulty of finding a platform that combines the necessary components for algorithmic trading, such as guidance from market practitioners, technical knowledge, and updated content. The EPAT course aims to bridge this gap by providing rich content built by market practitioners and constantly updated to reflect current trends. In addition, the course offers dedicated faculty support and a market-oriented approach, making it a great resource for those starting out in algorithmic trading or looking to build a career in the field.

  • 00:25:00 In this section, the speaker discusses a course on algorithmic trading and the content covered in the course. The course starts with a primer module where students from various backgrounds can create a foundation with basic statistics, probability theory, and application of financial models. The course moves on to Python basics and advanced statistics such as Gaussian models, which are used to understand more complex strategies. The course also includes resume building, setting up your own trading desk, and conducting mock interviews for placements with over 100 partnered companies. The course instructor personally helps students with any questions or difficulties, ensuring that doubts are cleared. The speaker also mentions exclusive benefits of joining the global algorithmic trading community, such as community events and features, which will be discussed in later sections.

  • 00:30:00 In this section, the speaker walks through the modules that make up the algorithmic trading course. The course begins with building blocks and moves on to equity, FX, and futures strategies, where students create different strategies in a hands-on environment. Market microstructure and implementation are also discussed, followed by a module on backtesting ideas on historical data using different APIs and brokers. Machine learning is introduced as an emerging area. Trading and front-office operations are emphasized as important, with a module covering how algorithmic trading infrastructure is set up. The course also includes modules on options trading, portfolio optimization, and risk management. Finally, students work on a project and, upon passing the exam, receive a verified certificate.

  • 00:35:00 In this section, the speaker discusses the Algorithmic Trading program offered by QuantInsti, which awards a verified EPAT certificate after completion of a 300+ hour course. The faculty includes well-known names in the industry who are approachable and provide hands-on experience in different asset classes and roles. The course covers everything from CV preparation to access to APIs and broker networks for easy implementation. Additionally, the QuantInsti team assists with fundraising, making it an ideal course for those interested in learning algorithmic trading.

  • 00:40:00 In this section, Nadine discusses the benefits of being a part of the EPAT community, which include lifelong guidance, the ability to connect with students from over 165 countries, exclusive events and sessions, free and subsidized access to brokers, access to backtesting tools such as BlueShift, and lifelong access to the most updated content. It also adds a fundamental quantitative dimension to your existing skill set. EPAT is recognized under the Financial Training Scheme and provides a reimbursement of 2,000 Singapore dollars for working professionals in Singapore.

  • 00:45:00 In this section, Ben Magnano discusses his journey with algorithmic trading, starting in 2005 when he was struggling with day trading. He eventually found QuantInsti, where he was introduced to the basics and fundamentals of quantitative and algorithmic trading through rigorous training and teachings. He learned python and was able to write his own program, later receiving his certificate as a quantitative trader. This led to an opportunity at WorldQuant as a research consultant, and he is still working with them today, always looking to improve his coding style and stay up-to-date with the latest industry trends, such as artificial intelligence.

  • 00:50:00 In this section, the speaker talks about the tremendous growth in the field of algorithmic trading and how it is increasingly becoming the preferred method for traders who do not want to spend their day babysitting their trades. The speaker also gives credit to quantitative analysts for bringing this vision to reality and expresses his gratitude for the excellent analysis provided by the presenters. The video ends with a brief summary of EPAT, a program designed to make participants industry-ready in the quant and FinTech domain.
Algorithmic Trading | Is It Right for You & How To Get Started
Algorithmic Trading | Is It Right for You & How To Get Started
  • 2019.06.26
  • www.youtube.com
Nitin Aggarwal is the Co-founder of Alphom Advisory, which focuses on High Frequency Trading Strategies. He was also one of the key members of iRage Options ...
 

Risk Models For Quant Trading By Zura Kakushadze - May 16, 2019



Risk Models For Quant Trading By Zura Kakushadze - May 16, 2019

Zura Kakushadze, in his discussion, focuses on the challenges associated with calculating the inverse of the covariance matrix for optimizing portfolios of 2,000 US stocks. He highlights that when the number of observations in the time series of returns is smaller than the number of stocks in the portfolio, the sample covariance matrix becomes singular and cannot be inverted. Even if it were non-singular, the off-diagonal elements representing correlations would be highly unstable out-of-sample unless there is a significantly greater number of observations compared to the stocks, which is typically not the case in real-life applications.
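
The singularity problem is easy to reproduce numerically. Below is a small, hedged illustration (toy random data and scaled-down dimensions, not figures from the talk) of why the sample covariance matrix cannot be inverted when there are fewer observations than stocks:

```python
import numpy as np

# Toy example: 500 "stocks" but only 60 return observations (roughly three months of daily data).
n_obs, n_stocks = 60, 500
returns = 0.01 * np.random.randn(n_obs, n_stocks)

sample_cov = np.cov(returns, rowvar=False)       # 500 x 500 sample covariance matrix
print(np.linalg.matrix_rank(sample_cov))         # at most n_obs - 1 = 59, far below 500

# Because the matrix is rank-deficient, inverting it (as mean-variance optimization
# or Sharpe-ratio maximization requires) is impossible; np.linalg.inv(sample_cov)
# would fail or return numerical garbage.
```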

Kakushadze explains that risk models for quantitative trading strategies differ from traditional risk models due to shorter holding periods and ephemeral alphas. Long look-back periods are not desirable for these strategies, and alternative methods for calculating the covariance matrix are required. One common approach is to use a factor model that decomposes risk into factor risk and specific risk. The advantage of the factor model is that it represents the large covariance matrix by a much smaller factor covariance matrix, making it computationally efficient. However, Kakushadze points out that there are still intricate details that need to be addressed in the factor model.
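
As a rough sketch of the factor-model idea described above (dimensions and inputs are purely illustrative, not from the talk), the full covariance matrix is replaced by a loadings matrix, a small factor covariance matrix, and a diagonal specific-risk term:

```python
import numpy as np

n_stocks, n_factors = 2000, 50

loadings = np.random.randn(n_stocks, n_factors)          # B: N x K factor loadings
factor_returns = np.random.randn(500, n_factors)         # ample observations for only K factors
factor_cov = np.cov(factor_returns, rowvar=False)        # K x K factor covariance (invertible)
specific_var = np.random.uniform(0.01, 0.05, n_stocks)   # idiosyncratic (specific) variances

# Model covariance: Sigma = B @ F @ B.T + diag(specific variance)
model_cov = loadings @ factor_cov @ loadings.T + np.diag(specific_var)

# Only the K x K factor covariance and the N specific variances are estimated,
# instead of the N * (N + 1) / 2 free parameters of the full sample covariance matrix.
```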

The speaker further discusses the challenges associated with calculating volatility for each stock and suggests focusing on the sample correlation matrix rather than the sample covariance matrix. The sample correlation matrix is preferred due to issues such as singularity, instability, and other concerns associated with the covariance matrix. Kakushadze proposes factoring out skewed variances and using a factor model for the correlation matrix instead of the covariance matrix. The question of determining the risk factors arises, and two possibilities are suggested: using principal components of the sample correlation matrix or employing style factors such as size, momentum, and volatility.

Different types of risk factors suitable for quantitative trading are explored, including style factors and industry classifications. The speaker highlights the importance of using short horizon factors that are relevant for trading and excluding longer horizon factors. The risk of inadvertently neutralizing desirable alpha factors in the risk model is also discussed, emphasizing the need for careful selection and weighting of risk factors.

Kakushadze explains that standardized risk models purchased from vendors are incapable of removing undesirable risk factors or covering all the relevant directions of a trader's risk space. Therefore, the speaker suggests building a custom risk model from scratch. One approach is to use statistical risk models, which involve taking a time series of returns with a limited lookback period and creating factor loadings based on the principal components of the sample correlation matrix.

The concept of effective rank is introduced as a way to determine the number of principal components to use as risk factors. Effective rank measures the effective dimensionality of a matrix and can be calculated using spectral entropy. However, statistical risk models have limitations in terms of the number of risk factors, as it is constrained by the number of observations, resulting in limited coverage of the risk space. The instability of higher principal components out-of-sample is also a concern.
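
A minimal sketch of the effective-rank calculation, assuming the spectral-entropy definition mentioned above: normalize the eigenvalues of the sample correlation matrix into a probability distribution, compute its entropy, and exponentiate.

```python
import numpy as np

def effective_rank(corr_matrix):
    eigvals = np.linalg.eigvalsh(corr_matrix)
    eigvals = np.clip(eigvals, 0.0, None)        # guard against tiny negative eigenvalues
    p = eigvals / eigvals.sum()                  # normalize eigenvalues to a distribution
    p = p[p > 0]
    spectral_entropy = -(p * np.log(p)).sum()
    return np.exp(spectral_entropy)

# Toy example: 100 stocks, 60 observations -> effective rank comes out well below 100,
# suggesting how many principal components to keep as statistical risk factors.
returns = np.random.randn(60, 100)
corr = np.corrcoef(returns, rowvar=False)
n_factors = int(round(effective_rank(corr)))
```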

The instability of out-of-sample pairwise correlations and off-diagonal elements in the correlation matrix is discussed. Kakushadze explains that higher principal components calculated from an unstable correlation matrix are frequently updated and unstable, while the first principal component tends to be relatively stable. The speaker also delves into defining style factors suitable for shorter holding strategies and suggests dropping statistically insignificant correlations, such as shares outstanding, from intraday trading strategies.

Four common factors used in short horizon quant trading models are discussed: direction (momentum), volatility, liquidity, and price. Kakushadze explains how each factor is defined and how factor returns can be calculated using cross-sectional regression. The calculation of the annualized Sharpe ratio for each factor return is emphasized in determining their statistical relevance and suitability for trading strategies.
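
A hedged sketch of the cross-sectional regression and Sharpe-ratio check described above (variable names are illustrative; `loadings` would hold the direction, volatility, liquidity, and price exposures):

```python
import numpy as np

def factor_returns(stock_returns, loadings):
    """Regress each day's cross-section of stock returns on the factor loadings."""
    # stock_returns: T x N panel of returns; loadings: N x K exposure matrix
    T, K = stock_returns.shape[0], loadings.shape[1]
    f = np.zeros((T, K))
    for t in range(T):
        f[t], *_ = np.linalg.lstsq(loadings, stock_returns[t], rcond=None)
    return f

def annualized_sharpe(daily_series, periods_per_year=252):
    return np.sqrt(periods_per_year) * daily_series.mean() / daily_series.std()

# factor_ret = factor_returns(returns_panel, loadings)
# sharpes = [annualized_sharpe(factor_ret[:, k]) for k in range(factor_ret.shape[1])]
```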

The speaker moves on to testing and verifying factor loadings and the effectiveness of style factors in risk modeling. Backtesting intraday or shorter-horizon alpha trades on the residuals, after factoring out historical returns using the factor loadings, is suggested as one way of testing them. The backtests highlight the value of BICS sectors compared to style factors, even at the least granular level. Constructing risk models based on industries or sub-industries using fundamental industry classifications is recommended, as they cover a larger portion of the risk space. The stability of the first principal component out-of-sample affects the effectiveness of these risk models.

The construction of a factor loadings matrix for a large number of sub-industries is discussed, and hierarchical industry classifications are proposed as a solution. This approach involves modeling sub-industries first and then using the next granular level of industries to model the risk factors, continuing until the problem is reduced to a smaller matrix that can be properly calculated.

The process of reducing the problem step by step to calculate risk models for quant trading is explained. At each step the large covariance matrix is mapped onto a much smaller factor covariance matrix, for example 10 by 10, and the remaining factor, the market, is modeled with a one-factor model, reducing the problem from a very large matrix to a trivially small one. Including style factors in this construction is suggested, but their contribution may be limited compared to a larger number of risk factors from various industries. Style factors may not be ideal proxies for modeling correlations between stocks.

The importance of including an intercept in the normalization process of style factors is explained. The speaker clarifies that the log of the price, typically used as a style factor, is actually the log of the price divided by a normalization factor. The normalization factor is empirical and can be customized based on the trader's preference. While industry-based factors tend to be reliable proxies for modeling correlations, bilinear combinations of style factors are considered poor proxies. Therefore, traders are advised to focus on industry-based factors and customize their models according to their trading style and quantitative trading alphas.

The speaker introduces the concept of heterosis, which combines powerful ideas such as factor models, industry classifications, and principal components into a construction that can be highly effective in risk modeling. Clustering techniques are also discussed as a way to construct risk factors using multi-level clustering schemes that can replace fundamental industry classifications. However, non-deterministic clustering algorithms may produce different clusterings each time they are run, leading to noise in the system. To reduce noise, a large number of clusterings can be averaged or other techniques like dimensionality reduction or principal component analysis can be employed.

Different approaches for clustering in quant trading risk models are explored. The speaker explains that while k-means clustering may be non-deterministic, deterministic alternatives such as hierarchical clustering can be subjective and slower. The speaker suggests using risk models themselves for aggregation instead of relying solely on clustering. In the case of k-means, the non-deterministic nature arises from the initialization of cluster centers, but finding the global minimum is not always necessary. To improve upon the naive approach of using historical returns, normalizing returns against historical volatilities is proposed.

Cluster normalization and multi-level clustering are discussed for quant trading. For optimizing portfolios and improving performance, clustering is recommended to be done on returns divided by variance rather than returns normalized with two standard deviations. Two approaches for multi-level clustering are presented: bottom-up, where the most granular level is created first and clusters are then clustered successively, and top-down, where the least granular level is created first and tickers are then clustered successively. Deterministic algorithms such as hierarchical clustering offer no performance advantage over non-deterministic ones, and the speaker suggests combining clustering with aggregation techniques.
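
Below is a small, hedged sketch of the clustering step (not the speaker's exact procedure): returns are divided by their variance, stocks are clustered with k-means, and the cluster labels are turned into a binary factor loadings matrix.

```python
import numpy as np
from sklearn.cluster import KMeans

returns = 0.02 * np.random.randn(60, 500)          # toy T x N panel of daily returns
normalized = returns / returns.var(axis=0)         # divide returns by variance, as suggested

n_clusters = 20
labels = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit_predict(normalized.T)

# Binary loadings: each stock loads only on its own cluster, which plays the role of an "industry".
loadings = np.zeros((returns.shape[1], n_clusters))
loadings[np.arange(returns.shape[1]), labels] = 1.0

# Averaging the risk models built from many such k-means runs (different random_state values)
# is one way to tame the non-determinism discussed above.
```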

The speaker addresses the issue of determining the number of clusters in clustering-based risk models. Traditional methods such as the elbow method or silhouette analysis are mentioned, but they may not always provide reliable results. Instead, the speaker suggests using stability analysis, which involves creating multiple clustering solutions and measuring the stability of the resulting clusters. The stability can be assessed using techniques such as cluster-pair stability or bootstrap stability.

Kakushadze emphasizes the importance of stability in clustering-based risk models, as unstable clusters can lead to unreliable risk estimates. He suggests that stable clusters should be used for risk modeling, while unstable clusters should be discarded or combined with other clusters to improve stability. The speaker also mentions the use of machine learning techniques, such as hierarchical clustering using machine learning algorithms, as an alternative to traditional clustering methods.

The discussion then moves on to the construction of risk models based on the selected clusters. The speaker proposes using the sample correlation matrix within each cluster to estimate the factor loadings. By decomposing the sample correlation matrix of each cluster into its eigenvalues and eigenvectors, the factor loadings can be obtained. The factor loadings matrix for the entire portfolio can then be constructed by combining the factor loadings from each cluster.

The speaker highlights the importance of properly normalizing the factor loadings to ensure that they represent risk contributions. He suggests using the inverse of the eigenvalues as weights for the factor loadings to achieve risk parity. This ensures that each stock contributes equally to the overall portfolio risk. The risk model can be further enhanced by including additional factors such as style factors or industry-based factors.

Zura Kakushadze discusses the challenges and approaches in constructing risk models for quantitative trading strategies. He emphasizes the importance of addressing issues such as singularity and instability in the covariance matrix, as well as selecting appropriate risk factors and clustering techniques. By combining factor models, industry classifications, and clustering, traders can build custom risk models that effectively capture the risk characteristics of their portfolios.

  • 00:00:00 The presenter discusses the issue of calculating the inverse of the covariance matrix in order to optimize a portfolio of 2,000 US stocks, using techniques such as mean variance optimization or Sharpe ratio maximization. He explains that if the number of observations in the time series of returns is smaller than the number of stocks in the portfolio, the sample covariance matrix will be singular and cannot be inverted. Even if it were non-singular, the off-diagonal elements (representing correlations) would be highly unstable out-of-sample unless the number of observations is much greater than the number of stocks in the portfolio, which is typically never the case in real-life applications.

  • 00:05:00 Zura Kakushadze discusses how risk models for quantitative trading strategies differ from traditional risk models. With shorter holding periods and ephemeral alphas, long look-back periods are not desirable for these strategies, and a replacement for the sample covariance matrix is needed. This is typically done through a factor model, which decomposes risk into factor risk and specific risk. The factor model has the advantage of modeling the large matrix by a much smaller factor covariance matrix, making it computationally efficient. However, there are still devilish details that need to be addressed.

  • 00:10:00 Zura Kakushadze discusses the issues with calculating the volatility (Sigma) for each stock and explains that the sample correlation matrix, rather than the sample covariance matrix, should be the focus of modeling due to its singularity, instability, and other issues. He suggests factoring out skewed variances and modeling via a factor model for the correlation matrix, not the covariance matrix. The question of what the risk factors should be arises, and he suggests two possibilities: using some of the sample correlation matrix's principal components or using the so-called style factors, which are the measured properties of stocks such as size, momentum, volatility, etc.

  • 00:15:00 Zura Kakushadze discusses different types of risk factors that can be used in quantitative trading, including style factors and industry classifications. He highlights the issue of using longer horizon style factors in short horizon trading, as they can create noise in optimization and generate additional trades with no alpha behind them. It is important to focus on short horizon factors that are relevant for trading and exclude longer horizon factors. Another issue is inadvertent alpha neutralization, where a factor in the risk model that is desirable to be long can inadvertently be neutralized, so careful consideration must be given to the selection and weighting of risk factors.

  • 00:20:00 The speaker explains how optimized risk models can neutralize the desirable alpha factor that quantitative traders want to be long on. Standardized risk models that are purchased from vendors are incapable of removing undesirable risk factors from the factor model or the covariance matrix and cannot cover the relevant directions of the trader's risk space. Therefore, the speaker suggests that a custom risk model should be built from scratch. One way to build a custom risk model is to use statistical risk models, which involves taking a time series of returns with a limited lookback period and creating factor loadings based on the first K principal components of the sample correlation matrix.

  • 00:25:00 Zura Kakushadze discusses the effective rank as a way to determine the number of principal components to use as risk factors in a factor loadings matrix. Effective rank is defined as the effective dimensionality of a matrix and can be calculated using spectral entropy to determine the effective dimensionality of a sample correlation matrix. The limitation of using statistical risk models is that the number of risk factors is limited by the number of observations, resulting in a relatively small portion of the risk space being covered. The instability out-of-sample is also a concern with higher principal components of the sample correlation matrix.

  • 00:30:00 Zura Kakushadze talks about the instability of out-of-sample pairwise correlations, and how it relates to the instability of the off-diagonal elements in the correlation matrix. He explains that the higher principal components that are calculated out of this unstable correlation matrix update frequently and are unstable, while the first principal component is relatively stable. Kakushadze also discusses how to define style factors that are relevant for shorter holding strategies, such as market cap and log of price, and how shares outstanding can be dropped as it has statistically insignificant correlations with the alpha in intraday trading strategies.

  • 00:35:00 Zura Kakushadze discusses four common factors that are used in short horizon quant trading models: direction (momentum), volatility, liquidity, and price. He explains how to define each of these factors and how to calculate their factor returns using cross-sectional regression. Kakushadze also emphasizes the importance of calculating the annualized Sharpe ratio for each factor return in determining their statistical relevance and suitability for betting in a trading strategy.

  • 00:40:00 The speaker discusses ways of testing and verifying factor loadings and the effectiveness of style factors in risk modeling. One way of testing factor loadings is to run backtests on intraday trades or shorter alpha trades on the residuals after factoring out the historical returns using the factor loadings. The speaker also presents data from backtests, emphasizing the value of BICS sectors as compared to style factors, even at the least granular level. The speaker then suggests constructing risk models based on industries or sub-industries using fundamental industry classifications such as BICS or GICS, as they cover a larger chunk of the risk space than style factors. The effectiveness of these risk models depends on the stability of the first principal component out-of-sample.

  • 00:45:00 Zura Kakushadze discusses the construction of a factor loadings matrix and the challenge of calculating it properly for a large number of sub-industries. He suggests hierarchical industry classifications as a solution where the problem is reduced to a smaller matrix using a Russian doll risk embedding approach. This involves modeling the sub-industries first, then modeling those risk factors using the next granular level of industries and so on until the problem is reduced to a smaller matrix that can be properly calculated.

  • 00:50:00 Zura Kakushadze discusses the process of reducing the problem in a stepwise fashion to calculate risk models for quant trading. By mapping the problem onto a small factor covariance matrix, for example 10 by 10, and building a one-factor model for the remaining factor, the market, the problem shrinks from 2,000 by 2,000 to one by one. He suggests including style factors in this construction, but notes that their contribution may be limited compared to a larger number of risk factors from various industries. Additionally, style factors may not be good proxies for modeling pairwise correlations between stocks.

  • 00:55:00 Zura Kakushadze explains why an intercept must be included in the normalization process of style factors. The intercept is necessary because the log of the price, which is typically used as a style factor, is not actually the log of the price but the log of the price divided by a normalization factor. This normalization is an empirical question and can be customized as per the trader's preference. While industry-based factors tend to be reliable proxies for modeling correlations, bilinear combinations of style factors are rather poor proxies. Thus, traders should focus on industry-based factors and customize their models according to their trading model and quantitative trading Alphas.

  • 01:00:00 The speaker discusses the concept of heterosis, which is the combination of powerful ideas such as factor models, industry classifications, and principal components into a construction that can be quite powerful in risk modeling. He explains that clustering techniques can also be used in constructing risk factors through multi-level clustering schemes that can replace fundamental industry classifications. However, one issue with clustering is that it is non-deterministic and can generate different clusterings every time it is run, leading to noise in the system. To reduce the noise, one can either average a large number of clusterings or use other techniques such as dimensionality reduction or principal component analysis.

  • 01:05:00 The speaker discusses different approaches for clustering in quant trading risk models. They explain that while k-means may be non-deterministic, using deterministic alternatives such as hierarchical clustering can be subjective and slower. Additionally, the speaker suggests using risk models themselves to aggregate instead of clusterings. When using k-means, the speaker notes that the initialization of centers for each cluster is what causes the non-deterministic nature of the algorithm, but finding the global minimum is not always necessary. To improve upon the naive approach of using historical returns, the speaker suggests normalizing returns against historical volatilities.

  • 01:10:00 Zura Kakushadze discusses cluster normalization and multi-level clustering for quant trading. He suggests that clustering should be done on returns divided by variance rather than returns normalized with two standard deviations to optimize portfolios and improve performance. Kakushadze proposes two ways for multi-level clustering: bottom-up, where the most granular level is created first, then clusters are clustered successively, and top-down, where the least granular level is created first, then tickers are clustered successively. Deterministic algorithms such as hierarchical clustering are not advantageous in performance compared to non-deterministic ones, and Kakushadze suggests using clustering and aggregation techniques.

  • 01:15:00 The speaker discusses possible ways to fix the number of clusters in a trading model. One option is to use heuristics based on effective rank to determine the number of clusters needed. Alternatively, one could keep the number of clusters as hyperparameters and optimize them through out-of-sample backtests. Additionally, there is a method discussed for aligning clusters produced by different k-means runs and clustering these aligned centers through k-means to generate an alignment of the original k-means runs into k clusters. This method may result in a smaller number of clusters than what was intended, but can still provide a useful model with fewer clusters.

  • 01:20:00 The speaker discusses various ways to aggregate risk models in quantitative trading. One approach is to align the clusters using k-means and drop empty clusters to eliminate noisy clusters, which can be applied as the clustering algorithm. Although the alignment process itself is non-deterministic, it produces a less noisy and sufficient result. Another method involves aggregating the risk models themselves by computing the model covariance matrix based on a single k-means, which is a factor model. However, the corresponding factor covariance matrix may be singular due to small values of p and a large number of clusters, limiting the coverage of the risk space. By aggregating a large number of single k-means-based risk models, a lot more directions in the risk space are covered, resulting in a non-factorized risk model with broader coverage.

  • 01:25:00 Zura Kakushadze discusses the different ways of doing risk modeling and which approach performs better. He explains that statistical risk models based on principal components are the worst performing because they only cover a small portion of the risk space. Machine learning risk models such as clustering perform substantially better because they uncover relationships between returns that are not there on the linear level. However, they still underperform the heterotic risk models based on fundamental industry classification. Humans still beat machines in this aspect because the fundamental industry classifications are based on a thorough analysis of numerous factors, despite the occasional incorrect judgment calls. It is unknown if a machine learning algorithm will ever be able to outperform humans in risk modeling.

  • 01:30:00 The speaker encourages viewers to dive into backtesting and getting hands-on experience with the trading strategies discussed in the video. They provide links to papers and source code that can be used to optimize and adapt the strategies to individual trading styles. Additionally, the organizers mention initiatives of QuantInsti, including a certification program and a self-paced learning portal, aimed at becoming a global knowledge and technology powerhouse in algorithmic and quantitative trading.
Risk Models For Quant Trading By Zura Kakushadze - May 16, 2019
Risk Models For Quant Trading By Zura Kakushadze - May 16, 2019
  • 2019.05.17
  • www.youtube.com
Learn about using Risk Modelling for the purpose of Quant Trading from none other than the renowned personality, Dr. Zura Kakushadze. Zura is the President a...
 

Forex Trading for Beginners | Algorithmic Trading In FX Markets By Dr. Alexis Stenfors



Forex Trading for Beginners | Algorithmic Trading In FX Markets By Dr. Alexis Stenfors

Dr. Alexis Stenfors delves into a comprehensive analysis of the foreign exchange (FX) market, with a particular focus on liquidity and its significance. He begins by emphasizing the immense size of the FX market and its comparative scale in relation to the global stock market. Despite potential crises or natural disasters, liquidity in the FX market tends to remain robust.

Dr. Stenfors sheds light on the competitive nature of the professional FX market, noting its international scope. Trading a single currency pair in this market is not possible without simultaneously trading another currency pair. This characteristic distinguishes the FX market from the stock market, where buying stocks is more common and straightforward. Furthermore, central banks can intervene in the FX market by influencing the value of a currency through actions such as printing money or direct intervention, whereas such interventions are less common in the stock market. Additionally, the FX market operates with little regulation, no circuit breakers, and limited transparency, making it challenging to access reliable data for research purposes.

The core of liquidity in the FX market is explained by Dr. Stenfors, who highlights the importance of relationships and conventions between banks. Unlike traditional stock and equity markets, market makers in the FX market cannot quote prices or provide liquidity unless they know that another party is ready to reciprocate. In the FX swap market, competitors' bid-ask spreads tend to cluster around specific digits, and intriguingly, competitors often quote the exact same spreads rather than offering varied spreads.

Market conventions in the forex trading industry are discussed by Dr. Stenfors, focusing on price- and volume-based conventions. These conventions dictate appropriate trading behavior and facilitate strong relationships between banks and customers. Surveys indicate that only a small percentage of traders follow conventions primarily for profit-making purposes, while the majority perceive them as a means to foster relationships and maintain a positive market image. The rise of algorithmic trading has brought about changes in these conventions, with algorithmic trading accounting for over 70% of trading on platforms like EBS.

The implications of algorithmic trading for the forex market are debated by Dr. Stenfors. Proponents argue that high-frequency trading can enhance market efficiency, reduce transaction costs, and improve liquidity. However, skeptics contend that algorithms are ill-suited for adhering to conventions that were originally designed for human relationships. Traders using electronic platforms may face challenges when the market swiftly moves as they attempt to execute trades. Liquidity is now perceived as complex and difficult to ascertain. Despite differing viewpoints on algorithms, both sides agree that FX liquidity is undergoing changes that require closer examination. Dr. Stenfors presents data from a trading platform indicating an equal split between human and algorithmic trading in 2010.

Examining the volume and liquidity of the forex market, Dr. Stenfors focuses on the euro dollar currency pair as an example. He reveals that over three trading days, the total amount of limit orders for euro dollar was 1.8 trillion, with a narrow spread of only 0.08 percent. This indicates a highly liquid market with tight spreads. However, less than one percent of all limit orders actually resulted in transactions, and the median limit order lifetime was a mere 2.5 seconds. These findings suggest that while the market may appear liquid, its true liquidity might be less significant than it appears. Dr. Stenfors poses the question of whether liquidity can be swiftly accessed and conducts a test to determine if the market reacts promptly to attempted deals.
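
The liquidity statistics quoted above can be reproduced from an order log with a few lines of pandas. The snippet below is purely illustrative (the column names and the two sample rows are assumptions, not Dr. Stenfors' data):

```python
import pandas as pd

orders = pd.DataFrame({
    "submitted": pd.to_datetime(["2019-01-07 09:00:00.000", "2019-01-07 09:00:00.500"]),
    "cancelled": pd.to_datetime(["2019-01-07 09:00:02.500", "2019-01-07 09:00:01.000"]),
    "size_usd":  [1_000_000, 2_000_000],
    "filled":    [False, True],
})

total_limit_volume = orders["size_usd"].sum()            # cf. the 1.8 trillion figure
fill_rate = orders["filled"].mean()                      # fraction of limit orders that traded
lifetimes = (orders["cancelled"] - orders["submitted"]).dt.total_seconds()
median_lifetime = lifetimes.median()                     # cf. the 2.5-second median lifetime
```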

Dr. Stenfors shares his research on the impact of limit order submissions on liquidity in the FX market. Analyzing 1.4 million limit order submissions, he discovers that a new limit order immediately adds liquidity to the other side of the order book, benefiting high-frequency traders. However, liquidity disappears within 0.1 second, suggesting that algorithmic trading only contributes to short-term liquidity. Dr. Stenfors highlights a significant shift in the willingness to support liquidity in the FX market over the past decade, underscoring the importance of considering various aspects of liquidity, such as price-based liquidity, volume-based liquidity, community-based liquidity, and speed-based liquidity when analyzing the market.

The concept of different order types in forex trading and their ethical implications is explained by Dr. Stenfors. He elucidates that split orders are employed to divide large orders into smaller ones to prevent other traders from canceling their orders and to conceal information-rich orders. However, spoof orders, which create a false impression of the market state, are typically illegal in most markets. On the other hand, ping orders, aimed at extracting hidden market information, are less controversial but subject to interpretation. Dr. Stenfors also introduces his conservative definition of split orders, revealing that they accounted for 15-20% of euro dollar and dollar yen orders among the five currency pairs examined.

Dr. Stenfors delves into the use of split orders and their aggressiveness in the FX market. Contrary to popular belief, large orders often exhibit high aggression, and split orders serve not only to mask larger amounts but also to enable algorithmic traders to submit more aggressive orders. However, the market response to split orders is much more pronounced compared to typical human orders, and algorithms quickly adapt to this strategy, making split orders less effective. The discussion also touches upon spoofing and pinging, indicating that major currency pairs like euro dollar and dollar yen are highly sensitive to information, making them susceptible to spoofing, while pinging is used to extract hidden information by testing the market with orders and observing any reactions.

Dr. Stenfors presents a proxy he developed to analyze the prevalence of "pinging" in various FX markets. A ping order is canceled before any market change occurs, making it a potential indicator of pinging activity. Using a comprehensive database, Dr. Stenfors estimates that around 10% of orders in the euro dollar and dollar yen markets may be potential ping orders. However, in markets like euro Swedish and dollar ruble, this percentage increases significantly, reaching as high as 50% and 80% respectively. Notably, pinging appears to be more prominent in less traded markets on the platform. Dr. Stenfors suggests that studying liquidity requires consideration of diverse strategies and order lifetimes, as the market-making function, particularly in the FX spot market, is increasingly being carried out by algorithms.
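
As a rough sketch of the pinging proxy described above (the column names and data structure are assumptions): an order counts as a potential ping if it is cancelled, unfilled, before the best bid or offer changes at all.

```python
import pandas as pd

def flag_potential_pings(orders, quotes):
    """orders: submitted/cancelled/filled per order; quotes: timestamped best bid and ask."""
    flags = []
    for _, o in orders.iterrows():
        window = quotes[(quotes["time"] >= o["submitted"]) & (quotes["time"] <= o["cancelled"])]
        market_unchanged = window[["bid", "ask"]].nunique().max() <= 1   # no quote change in between
        flags.append(bool(market_unchanged) and not o["filled"])
    return pd.Series(flags, index=orders.index, name="potential_ping")

# share_of_pings = flag_potential_pings(orders, quotes).mean()   # roughly 0.10 for euro dollar, per the talk
```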

Dr. Stenfors discusses the evolving nature of liquidity in the forex market and emphasizes the need for a broader range of metrics to assess it. He underscores the impact of order strategies such as order splitting, spoofing, and pinging. While these issues have been extensively studied in equity markets, their effects on forex liquidity can be significantly different, despite the larger size of the forex market. Dr. Stenfors recommends that traders remain aware of these complexities regardless of their order submission methods and provides additional resources for those interested in further exploration.

Dr. Alexis Stenfors offers a detailed analysis of the forex market, specifically focusing on liquidity and its various dimensions. His research highlights the unique characteristics of the forex market, including its size, competitive nature, and international scope. He emphasizes the importance of market conventions, the implications of algorithmic trading, and the impact of different order types on liquidity. Through his studies, Dr. Stenfors reveals the complexities and evolving nature of forex liquidity, underscoring the need for comprehensive assessment and understanding in this dynamic market.

  • 00:00:00 Dr. Alexis Stenfors discusses the foreign exchange (FX) market, and in particular, the importance of liquidity. He highlights the size of the FX market and its comparative size to the global stock market. He also notes how liquidity in general is very good, even during times of crisis or natural disasters. Dr. Stenfors then goes on to discuss the competitive nature of the FX market in the professional setting and how it is international, which means that one cannot trade a single currency pair without trading something else as well.

  • 00:05:00 Dr. Alexis Stenfors explains the unique characteristics of the Forex market that differentiate it from the stock market. The Forex market has perfect symmetry in that buying one currency involves automatically selling another, whereas the stock market is biased towards buying stocks. Additionally, central banks can intervene in the Forex market by regulating the value of a currency through printing money or through direct intervention, whereas they do not usually intervene in the stock market. The Forex market is also an unregulated market with no circuit breakers, and it is an OTC market, making it very opaque and difficult to access data for research purposes.

  • 00:10:00 Dr. Alexis Stenfors explains the core of liquidity in the FX market and the different types of liquidity based on pricing, volume, and speed. Liquidity in this market is based on relationships and conventions between banks, which is different from traditional stock and equity markets. Market makers are unable to quote prices or provide liquidity without knowing that another party is there to quote them back. In the FX swap market, competitors' bid-ask spreads tend to cluster around certain digits, and the interesting part is that competitors often quote the exact same spreads rather than different ones.

  • 00:15:00 Dr. Alexis Stenfors discusses the importance of market conventions in the forex trading industry, including price- and volume-based conventions. These conventions relate to appropriate trading behavior and maintaining good relationships between banks and customers. Surveys show that only a small percentage of traders follow conventions to make a profit, while the majority see it as a means of fostering relationships and maintaining a good market image. With the rise of algorithmic trading, these conventions are changing, with a significant increase in algorithmic trading on platforms such as EBS, where it now accounts for over 70% of trading.

  • 00:20:00 Dr. Alexis Stenfors discusses the implications of algorithmic trading for the forex market. While some argue that high-frequency trading can lead to a more efficient market with lower transaction costs and better liquidity, others argue that algorithms are not suitable for following conventions meant for human relationships. Traders who use electronic trading platforms may experience disappointment when the market moves as soon as they try to deal, and liquidity is now viewed as complicated and difficult to pin down. Regardless of one's stance on algorithms, both sides agree that FX liquidity is changing and needs to be looked at more carefully. Dr. Stenfors presents data from a trading platform that was 50% human and 50% algorithmic trading in 2010.

  • 00:25:00 Dr. Alexis Stenfors discusses the volume and liquidity of the forex market, using the example of the euro dollar currency pair. He notes that during three trading days, the total amount of limit orders for euro dollar was 1.8 trillion, with a spread of only 0.08 percent, making it a very liquid market with tight spreads. However, he goes on to note that less than one percent of all limit orders actually resulted in a transaction, and the median limit order lifetime was just 2.5 seconds, suggesting that while the market appears liquid, it may be less liquid than it seems. He then poses the question of whether liquidity can be drawn on quickly, and conducts a test to check whether the market moves as soon as a deal is attempted.

  • 00:30:00 Dr. Alexis Stenfors discusses his research on the impact of limit order submissions on liquidity in the FX market. He analyzed 1.4 million limit order submissions and found that a new limit order immediately adds liquidity to the other side of the limit order book, which is beneficial for high-frequency traders. However, liquidity disappears after 0.1 second, and this is consistent with the idea that algorithmic trading is only good for liquidity in the very short term. Furthermore, he points out that there has been a significant change in the willingness to support liquidity in the FX market over the last ten years. Therefore, it is important to consider price-based liquidity, volume-based liquidity, community-based liquidity, and speed-based liquidity when analyzing the market.

  • 00:35:00 Dr. Alexis Stenfors explains the concept of different order types in forex trading and their ethical implications. He explains that split orders are used to break down large orders into smaller ones to prevent other traders from canceling their orders and to hide information-rich orders. Spoof orders, by contrast, are illegal in most markets as they create a false impression of the state of the market. Ping orders are intended to extract hidden information about the market and are not considered controversial, but their significance varies according to interpretation. The section also covers Dr. Stenfors' conservative definition of split orders, which put their share at 15-20% for euro dollar and dollar yen among the five currency pairs examined.

  • 00:40:00 Dr. Alexis Stenfors discusses the use of split orders and their aggressiveness in the FX market. Contrary to popular belief, large orders are often very aggressive, and split orders are used not only to disguise larger amounts but also to enable algorithmic traders to submit more aggressive orders. However, the reaction to a split order is much stronger than to a typical human order, and algorithms are quick to pick up on this, making these order splitting strategies less successful. Dr. Stenfors also touches on the subject of spoofing and pinging, explaining that, contrary to common belief, major currency pairs such as the euro dollar or dollar yen are extremely sensitive to information, making them highly susceptible to spoofing, while pinging is used to extract hidden information by testing the waters with orders and observing any reactions.

  • 00:45:00 Dr. Alexis Stenfors discusses a proxy he created to analyze how prominent "pinging" is in different FX markets. A ping order is an order that is canceled before any change occurs in the market. Dr. Stenfors used a database to work out how many orders might be potential ping orders and found that the figure is around 10% in the euro dollar and dollar yen markets, and as high as 50% in euro Swedish and 80% in dollar ruble. The interesting fact here is that pinging seems to be more prominent in the less traded markets on the platform: ruble order flow on the platform looks large on screen but involves little actual trading, with almost 80% probably being pinging orders from algorithmic traders. Dr. Stenfors suggests that if you are studying liquidity, there are many ways to study it, and one important thing is to look at different strategies and work out the lifetime of the order, as the market-making function, especially in the FX spot market, is shifting towards being done more and more by algorithms.

  • 00:50:00 Dr. Alexis Stenfors discusses the changing liquidity of the forex market and the need for a wider range of metrics to assess it. He also highlights the impact of order strategies such as order splitting, spoofing, and pinging. Although these issues have been widely researched in equity markets, their impact on liquidity in the forex market can be vastly different, despite its larger size. Dr. Stenfors recommends that traders should be aware of these complexities, regardless of how they submit orders, and provides resources for those interested in learning more.
Forex Trading for Beginners | Algorithmic Trading In FX Markets By Dr. Alexis Stenfors
Forex Trading for Beginners | Algorithmic Trading In FX Markets By Dr. Alexis Stenfors
  • 2019.01.31
  • www.youtube.com
The turnover in the global FX market is almost ten times larger than in all stock markets combined. However, surprisingly little is known about HFT and algor...
 

Develop And Backtest Your Trading Strategies | Full Tutorial



Develop And Backtest Your Trading Strategies | Full Tutorial

The video begins by introducing an experienced quant who will provide guidance on developing and executing trading strategies using Blueshift, a cloud-based platform. Blueshift offers comprehensive data sets, including US and Indian equity markets, as well as detailed Forex data. The session covers systematic strategies, a primer on Python, an introduction to Blueshift, creating reusable templates for backtesting, technical indicators, constructing a simple strategy using a single indicator, and managing portfolio strategies. Importantly, the session does not offer trade recommendations or claim to provide foolproof strategies.

The speaker highlights the different approaches to trading styles, such as fundamental, technical, and quant, and how they treat trends, mean reversion, breakouts, and carry in unique ways. Designing a systematic trading strategy involves selecting securities, generating buy and sell signals, computing target portfolios, executing trades, and continuously improving the process. The speaker explains the inputs required for systematic strategies, including price data and its transformations, fundamental and non-market information, and trading rules/logic. These rules can be developed based on a trader's hypothesis or through data-driven techniques like machine learning and artificial intelligence.

The speaker emphasizes the importance of testing trading strategies through backtesting and forward testing. Backtesting helps traders verify the validity of their hypotheses, while forward testing guards against biases and pitfalls like data mining biases, survivorship biases, market impact modeling, and look-ahead biases. A flexible backtesting platform is essential for adjusting and modifying strategies, and risk management and portfolio creation are crucial as not all strategies perform well in every market. The speaker provides a brief introduction to using Python-based code in the Blueshift platform for strategy creation and testing.

The video explains the four essential functions required for backtesting trading strategies on Blueshift. These functions are "initialize," which sets up initial parameters, "before_trading_start," called before each trading session, "handle_data," executed at each new price bar arrival, and "analyze," used for strategy analysis. The speaker demonstrates the order in which these functions are called and how traders can position their code within each function. The section concludes with a basic introduction to using Python in the Blueshift platform.
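
A minimal skeleton of those four callbacks, assuming the zipline-style signatures that Blueshift uses as described in the session (the instrument and import path are illustrative):

```python
from blueshift.api import symbol, order_target_percent   # import path as used on Blueshift

def initialize(context):
    # Runs once at the start: set parameters, universe, schedules.
    context.universe = [symbol("NIFTY-I")]                # hypothetical instrument
    context.invested = False

def before_trading_start(context, data):
    # Runs before every trading session (e.g. refresh signals or filters).
    pass

def handle_data(context, data):
    # Runs on every new price bar; here, a trivial buy-and-hold.
    if not context.invested:
        order_target_percent(context.universe[0], 1.0)
        context.invested = True

def analyze(context, performance):
    # Runs at the end of the backtest for performance analysis.
    pass
```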

For viewers unfamiliar with Python, the video offers a primer on Python basics. It covers variables, strings, integers, floats, and data structures like dictionaries and lists. The creation of functions and classes in Python is also introduced. The video then delves into the Blueshift workflow, explaining the "initialize," "before_trading_start," "handle_data," and "analyze" steps. The usefulness of scheduled and ordering functions is highlighted.

The presenter discusses the three primary ordering functions in Blueshift. The first function, "order_percent_target," allows traders to take positions in underlying assets based on the target portfolio's weight. The second function, "get_open_orders," provides the number of pending orders, and the third function, "cancel_order," allows cancellation of orders. The presenter emphasizes the importance of controlling the trading environment and demonstrates functions like "set_commission," "set_slippage," and "set_account_currency." The "context" and "data" objects in Blueshift are explained, showcasing their role in capturing algorithm state and accessing data. An example illustrates accessing the portfolio and data for a simple buy-and-hold strategy using the "history" function. The concept of scheduling using the "schedule" function is introduced, allowing users to define when specific functions should be called.
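
A hedged sketch pulling those pieces together: scheduling, order management, and historical data access. Function names follow the session's zipline-style naming; exact Blueshift signatures may differ, and the instrument is illustrative.

```python
from blueshift.api import (symbol, order_target_percent, get_open_orders, cancel_order,
                           schedule_function, date_rules, time_rules)

def initialize(context):
    context.asset = symbol("NIFTY-I")                     # hypothetical instrument
    # Call rebalance every day, 30 minutes after the market opens.
    schedule_function(rebalance, date_rules.every_day(), time_rules.market_open(minutes=30))

def rebalance(context, data):
    # Cancel any pending orders before placing new ones.
    for asset, open_orders in get_open_orders().items():
        for open_order in open_orders:
            cancel_order(open_order)
    # Pull the last 20 daily closes and trade toward a target weight.
    closes = data.history(context.asset, "close", 20, "1d")
    target_weight = 1.0 if closes.iloc[-1] > closes.mean() else 0.0
    order_target_percent(context.asset, target_weight)
```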

The tutorial focuses on creating a template to streamline strategy development and avoid repetitive code. Technical indicator libraries like TA-Lib and standard libraries like Pandas and Numpy are imported. The universe of securities is narrowed down to major indices, and the "context" variable is initialized as a dictionary to store strategy parameters. These parameters include indicator lookback, buy/sell thresholds, moving average periods, RSI, Bollinger Bands, ATR, and trade frequency. This template aims to minimize boilerplate code and standardize parameters for easy modification.

The speaker introduces a variable to control trading and creates a portfolio with weights for each instrument in the universe. They set commission and slippage to zero for demonstration purposes. The "handle_data" function is defined to execute trading every 15 minutes. The "run_strategy" function becomes the main function for running the strategy. It retrieves past prices via "context.universe.prices" and computes weights before rebalancing. The "rebalance" function iterates through all securities in the universe and places orders to achieve the target weights. An anonymous function is defined to print the context portfolio and weights, and an "advisor" class is created to compute the weight object.

The speaker explains how to define inputs for the "advisor" class, including the name and signal function, and how to pass the stock selection universe. They cover initialization and storing the advisor's performance, as well as defining the main function that calls the signal function to generate buy/sell signals. The speaker emphasizes defining the signal function based on technical indicators, often expressed as weighted functions of past prices. They recommend referring to theoretical papers from experts like Cliff Asness of AQR Capital Management.
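
An illustrative sketch of such an advisor class (names and structure are assumptions, not the presenter's exact code): it bundles a name, a signal function, and the stock selection universe, and stores its own performance for later weighting.

```python
class Advisor:
    """Wraps one technical-indicator signal over a given universe of securities."""

    def __init__(self, name, signal_fn, universe):
        self.name = name
        self.signal_fn = signal_fn        # maps a price series to a signal in {-1, 0, +1}
        self.universe = universe
        self.performance = []             # realized performance, used later by the agent

    def compute_signal(self, prices):
        # prices: DataFrame of past prices, one column per security in the universe
        return {asset: self.signal_fn(prices[asset]) for asset in self.universe}
```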

Technical indicators and their correlation to the market are discussed based on statistical analysis using principal component analysis. Technical indicators act as filters on past prices or returns, capturing long or short-term trends by filtering high or low-frequency data. However, technical indicators can be self-fulfilling prophecies and are susceptible to certain types of trading algorithms that can lead to momentum or stop-loss hunting. It's important to have a portfolio of different indicators when developing and backtesting trading strategies.

The instructor explains importing the technical analysis library and lists available technical indicators. Using the example of Bollinger Bands, the instructor demonstrates the function "Bbands" to retrieve the last row's value. Other functions like RSI, MACD, Fibonacci support, resistance, etc., are also showcased. The instructor explains the "get_price" function and the "handle_data" function, which checks if it's time to trade for each period. The "run_strategy" function looks for suitable arguments using the "advisor_compute_signal_price" function, followed by the "rebalance" function to place orders for target percentages. Finally, the "analyze" function is used for strategy analysis.
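
For concreteness, here is a small sketch of the indicator calls using the TA-Lib Python bindings (a 20-period Bollinger Band and a 14-period RSI on a toy close series; the parameters and trading rule are illustrative):

```python
import numpy as np
import talib

closes = 100.0 + np.random.randn(200).cumsum()          # toy price series (float64 required)

upper, middle, lower = talib.BBANDS(closes, timeperiod=20, nbdevup=2, nbdevdn=2)
rsi = talib.RSI(closes, timeperiod=14)

# Use the last row's values, as in the demo: buy below the lower band, sell above the upper band.
signal = 1 if closes[-1] < lower[-1] else (-1 if closes[-1] > upper[-1] else 0)
```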

The speaker focuses on managing strategy portfolios to enhance algorithmic trading profits. Instead of relying on a single strategy, running multiple strategies simultaneously or in different periods is recommended. Four methods for managing strategy portfolios are discussed: creating a committee, using a regime switching model, dynamic allocation, and factor-based investing. Averaging can improve signal stability. The strategy's code involves adding an agent responsible for selecting advisors and allocating capital. The agent uses a weighing function to update advisor weights, which affect the rebalance function.

The speaker explains how to define and weigh portfolios based on the number of advisors, with equal allocation for each. They demonstrate creating separate expert advisors and an agent to allocate capital among them. A backtest using QuickBacktest shows significantly improved performance compared to individual cases. The speaker emphasizes the importance of drawdown in a trading strategy and suggests looking at the Sortino ratio and the stability of the profit and loss curve. The equal weighted average input portfolio significantly improves performance, but there is room for further improvement.

The speaker introduces the concept of "no-regret trading," which involves determining the best-performing investment strategy in a difficult-to-predict market. Rather than relying on a single investment, the strategy involves varying the weights of each investment. The speaker recommends using the exponential gradient algorithm to determine weights, adjusting them based on the portfolio's response to market scenarios. The Kelly criterion is also suggested for capital allocation, maximizing return versus variance based on geometric Brownian motion.
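
A minimal sketch of the exponential-gradient update mentioned above (the learning rate and returns are illustrative): each advisor's weight is scaled by the exponential of its latest return and the weights are renormalized.

```python
import numpy as np

def exponential_gradient_update(weights, advisor_returns, learning_rate=0.1):
    """Multiplicative-weights style update of advisor allocations."""
    w = np.asarray(weights, dtype=float)
    w = w * np.exp(learning_rate * np.asarray(advisor_returns, dtype=float))
    return w / w.sum()

# Example: three advisors start equal-weighted; the second performed best and gains weight.
new_weights = exponential_gradient_update([1/3, 1/3, 1/3], [0.001, 0.010, -0.005])
```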

The speaker explains the output of weights and how they differ for different advisors. They test a random signal that ideally receives less allocation compared to other signals if it is genuinely random. The speaker discusses the agent function, which takes a list of advisors and a learning rate parameter, and computes the weight function. It iterates through the advisors list, computes the advisor signal, aggregates them sector-wise, and updates the context weights based on the computed weight. The section concludes with guidelines on strategy development, including avoiding overfitting, checking account leverage, and providing a list of demo strategies for viewers to explore.

The speaker discusses different methods of forward testing, such as paper trading or trading with a small amount of capital in live markets. They mention that BlueShift currently does not support PyTorch or Jupyter Notebook but plans to support Keras and TensorFlow. The platform is not limited to Indian markets and can access US and Indian equity data as well as FX data. The speaker notes that BlueShift does not have built-in debugging tools at the moment but is considering adding them in the future.

The speaker talks about option backtesting and mentions that most platforms offering it are unreliable or require extensive data cleaning and arrangement. They also note that Indian Gravitons only support liquid futures and do not allow third-party data feeds. The recommended minimum backtesting time period depends on trading frequency, and although one-minute data for Indian markets is available, optimization runs are not efficient due to technology limitations. BlueShift does not have any fees, and there are no restrictions on the number of simultaneous backtests, as long as the website traffic can handle them. Backtesting for PSA and using Python packages is possible, but there is a restricted list of available packages for security reasons.

The speaker explains that backtesting is a crucial step in developing and evaluating trading strategies. It helps determine if a strategy is viable and profitable before deploying it in live markets. They highlight the importance of considering transaction costs, slippage, and other real-world factors when backtesting to ensure realistic results.

The speaker introduces the BlueShift platform, which provides an environment for backtesting and deploying trading strategies. BlueShift supports backtesting on Indian equity, US equity, and forex markets. Users can write and test their strategies using Python and leverage various built-in functions and libraries. The platform also allows users to paper trade their strategies or trade with real capital, depending on their preferences.

The speaker emphasizes the significance of forward testing, which involves deploying a strategy with a small amount of capital in live markets. This helps validate the strategy's performance and behavior in real-time conditions. They mention that BlueShift currently supports forward testing for Indian markets, and users can paper trade with a virtual capital of up to 1 crore (10 million) Indian Rupees.

Option backtesting is also discussed, with the speaker mentioning that many existing platforms for option backtesting are unreliable or require extensive data cleaning and preparation. They note that BlueShift does not currently support option backtesting but may consider adding it in the future.

Regarding data availability, the speaker mentions that BlueShift provides historical data for Indian equity, US equity, and forex markets. However, they note that optimizing strategies with one-minute data for Indian markets may not be efficient due to technological limitations.

The speaker clarifies that BlueShift does not have any fees for backtesting or using the platform. Users can conduct as many backtests as they want, as long as the website traffic can handle the load. They also mention that BlueShift has a restricted list of available Python packages for security reasons but users can still leverage popular packages like pandas and numpy.

The speaker highlights the importance of thorough backtesting and forward testing in strategy development. They encourage users to leverage the BlueShift platform for backtesting and deploying their trading strategies, while keeping in mind the limitations and considerations discussed during the presentation.

  • 00:00:00 The host introduces a seasoned quant who will guide viewers on how to develop and execute trading strategies using Blueshift, a cloud-based platform for strategy development. The quant describes Blueshift as having data sets covering US and Indian equity markets, as well as FX data at minute-level resolution. The session will cover a brief overview of systematic strategies, a short primer on Python, an introduction to Blueshift, creating a reusable template for backtesting, technical indicators, creating a simple strategy using a single technical indicator, and managing a portfolio of strategies in different ways. The session is not about trade recommendations or giving the best strategies that always work.

  • 00:05:00 The speaker mentioned that different trading styles (fundamental, technical, and quant) treat trending, mean-reverting, breakout, and carry markets in different ways. They also discussed how to design a systematic trading strategy, which involves selecting the universe of securities, generating buy and sell signals, computing target portfolios, executing the strategy, and continuously improving the process. Additionally, the speaker explained the inputs used to develop systematic strategies, such as prices and their transformations, fundamental and non-market information, and trading rules or logic, which can come from a trader's hypothesis or be learned from the data using machine learning and artificial intelligence.

  • 00:10:00 The speaker discusses the importance of testing trading strategies through backtesting and forward testing. Backtesting lets traders establish whether their hypothesis holds, while forward testing guards against biases such as data-mining bias, survivorship bias, inaccurate market-impact modeling, and look-ahead bias. The speaker emphasizes the need for a flexible backtesting platform to adjust and modify strategies, but also highlights the importance of portfolio construction and risk management, since not all strategies perform well in all markets. Finally, the speaker gives a brief primer on using Python-based code in the Blueshift platform to create and test trading strategies.

  • 00:15:00 The speaker explains the four functions needed for backtesting trading strategies on the platform. The first, initialize, sets up the initial parameters for the backtest. The second, before_trading_start, is called every day before the trading session opens. The third, handle_data, is called on the arrival of every new price bar, and the final function is analyze. The speaker also shows the order in which each function is called for a selected data set and how to decide where to put code in each function (a skeleton of the four callbacks appears below). The section ends with a brief introduction to using Python for coding.
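
As a point of reference, here is a bare skeleton of those four callbacks in the Zipline-style layout the platform follows; the bodies are placeholders indicating where each kind of code typically lives.

```python
# Bare-bones skeleton of the four callbacks described above; placeholder bodies.
def initialize(context):
    # runs once at the start: define universe, parameters, costs, schedules
    context.params = {}

def before_trading_start(context, data):
    # runs once per day before the session opens: refresh daily inputs
    pass

def handle_data(context, data):
    # runs on every new price bar: compute signals and place orders
    pass

def analyze(context, perf):
    # runs once at the end: inspect the performance DataFrame
    print(perf.tail())
```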

  • 00:20:00 Python basics are explained for those who may not be familiar with the language. The use of variables, strings, integers, and floats is discussed, as well as data structures such as dictionaries and lists. The creation of functions and classes in Python is also introduced (a short illustration follows). The video then moves on to explain the four steps in the Blueshift workflow: initialize, before_trading_start, handle_data, and analyze. The usefulness of scheduling and ordering functions is also explained.
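
For readers who want the one-screen version of that primer, a tiny illustrative snippet (names and values are arbitrary):

```python
# Variables, containers, a function and a class, as covered in the primer.
symbol = "NIFTY"                                 # string
lots = 2                                          # integer
price = 17850.5                                   # float
watchlist = ["NIFTY", "BANKNIFTY"]                # list
params = {"lookback": 20, "rsi_period": 14}       # dictionary

def notional(qty, px):
    """Quantity times price."""
    return qty * px

class Position:
    """Minimal class holding a symbol and a quantity."""
    def __init__(self, symbol, qty):
        self.symbol = symbol
        self.qty = qty

print(notional(lots, price), Position(symbol, lots).symbol)
```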

  • 00:25:00 The presenter discusses the three main ordering functions used in Blueshift. The first is order_target_percent, used to take positions in the underlying assets according to the weights of the target portfolio. The second is get_open_orders, which returns the orders that still need to be executed, and the third is cancel_order. Additionally, the presenter explains the importance of controlling the trading environment and shows how to do so using functions such as set_commission, set_slippage, and set_account_currency. The presenter also explains the context and data objects in Blueshift, how they are used to capture the state of the algorithm and access data, and gives an example of accessing the portfolio and data in a simple buy-and-hold strategy using the history function. Finally, the presenter introduces scheduling via the schedule_function call, which defines when a function is invoked in terms of days and time (see the sketch below).
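
The sketch below pulls those pieces together in the Zipline-style API that Blueshift follows: a scheduled rebalance that cancels stale orders, pulls price history, and orders toward a target weight. The instrument name and the toy rule are assumptions made for illustration only.

```python
# Illustrative use of the ordering, scheduling and data-access calls above.
from zipline.api import (order_target_percent, get_open_orders, cancel_order,
                         schedule_function, date_rules, time_rules, symbol)

def initialize(context):
    context.asset = symbol("NIFTY-I")          # placeholder instrument
    # run rebalance every day, 30 minutes after the open
    schedule_function(rebalance, date_rules.every_day(),
                      time_rules.market_open(minutes=30))

def rebalance(context, data):
    # cancel anything still resting before sending new orders
    for asset, open_orders in get_open_orders().items():
        for order in open_orders:
            cancel_order(order)
    closes = data.history(context.asset, "close", 20, "1d")   # last 20 closes
    target = 0.5 if closes.iloc[-1] > closes.mean() else 0.0  # toy rule
    order_target_percent(context.asset, target)
```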

  • 00:30:00 The tutorial focuses on creating a template that traders can reuse to avoid repetitive code. The template imports a technical analysis library (TA-Lib) and the standard Pandas and Numpy libraries. The universe is then reduced to two major indices, and a context variable is initialized as a dictionary that stores all strategy parameters in a single place. The parameters include the indicator look-back, buy and sell thresholds, periods for the fast and slow moving averages, RSI, Bollinger Bands, and ATR, and the trade frequency (an illustrative parameter dictionary follows). This template minimizes boilerplate code and standardizes parameters for easy changes.
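
An illustrative version of such a parameter dictionary (the values are arbitrary placeholders, not the presenter's settings):

```python
# Strategy parameters kept in one place inside initialize(); values are placeholders.
def initialize(context):
    context.params = {
        "indicator_lookback": 375,     # bars of history to pull
        "buy_signal_threshold": 0.5,
        "sell_signal_threshold": -0.5,
        "SMA_period_short": 15,        # fast moving average
        "SMA_period_long": 60,         # slow moving average
        "RSI_period": 14,
        "BBands_period": 20,
        "ATR_period": 14,
        "trade_freq": 15,              # minutes between trades
    }
```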

  • 00:35:00 The speaker adds a variable to control the trading frequency and creates a weights portfolio for each instrument in the universe. They set the commission and slippage to zero for demo purposes. The handle_data function is defined so that trading occurs every 15 minutes. The run_strategy function is the main function that runs the strategy: it calls context.universe.prices to get the past prices of the selected futures and computes the weights before rebalancing. The rebalance function then loops through all the securities in the universe and places orders to achieve the target weights (a sketch follows). The speaker also defines an anonymous function to print the context portfolio and weights at the end, and creates a class called Advisor to compute the weight object.
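
A hedged sketch of that flow: handle_data only fires the strategy every trade_freq bars, run_strategy computes weights, and rebalance orders each security toward its target. context.universe, context.weights, context.bar_count and the equal-weight placeholder are illustrative assumptions, not the speaker's code.

```python
# Illustrative 15-minute gating plus target-weight rebalancing.
from zipline.api import order_target_percent

def handle_data(context, data):
    # context.bar_count is assumed to be set to 0 in initialize()
    context.bar_count += 1
    if context.bar_count < context.params["trade_freq"]:
        return                                  # not yet time to trade
    context.bar_count = 0
    run_strategy(context, data)

def run_strategy(context, data):
    compute_weights(context, data)              # fills context.weights
    rebalance(context, data)

def compute_weights(context, data):
    # placeholder: equal weights across the universe
    n = len(context.universe)
    context.weights = {sec: 1.0 / n for sec in context.universe}

def rebalance(context, data):
    for security in context.universe:
        order_target_percent(security, context.weights.get(security, 0.0))
```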

  • 00:40:00 The speaker discusses how to define the inputs for the advisor class, including the name and signal function, and how to pass on the stock selection universe. They also go over how to initialize and store the performance of the advisor, as well as define the main function that will call the signal function to generate signals for buying or selling stocks. The speaker emphasizes the importance of defining the signal function based on technical indicators, which can be expressed as weighted functions of past prices. They also recommend looking at theoretical papers from experts in the field, such as Cliff Asness of AQR Capital Management.
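
A minimal Advisor class along these lines might look as follows; the constructor arguments and the indicator_lookback parameter are assumptions made for the sketch, not the speaker's exact definitions.

```python
# Illustrative Advisor: holds a name, a signal function and a universe,
# tracks its own performance, and returns per-security weights on request.
class Advisor:
    def __init__(self, name, signal_fn, universe):
        self.name = name
        self.signal_fn = signal_fn        # maps a price history to weights
        self.universe = universe
        self.perf = []                    # realised returns of this advisor
        self.last_return = 0.0

    def compute_signal(self, context, data):
        prices = data.history(self.universe, "close",
                              context.params["indicator_lookback"], "1m")
        return self.signal_fn(prices)     # {security: weight}

    def record(self, period_return):
        self.last_return = period_return
        self.perf.append(period_return)
```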

  • 00:45:00 The speaker discusses technical indicators and their correlation to the market based on statistical analysis through principal component analysis. Technical indicators can be thought of as filters on past prices or past returns, passing high- or low-frequency components to pick up short- or long-term trends. However, technical indicators can become self-fulfilling prophecies, which makes them useful for making money but also the target of certain categories of trading algorithms that engage in momentum or stop-loss hunting. Additionally, just because a momentum indicator shows momentum does not necessarily mean the market is in momentum. Therefore, a portfolio of different indicators can come in handy when developing and backtesting trading strategies.

  • 00:50:00 The instructor explains the technical analysis library import and the list of available technical indicators. They use the example of a Bollinger Bands helper calling the library function BBANDS and returning the last row's values, and show other functions such as RSI, MACD, Fibonacci support and resistance, etc. (an illustrative helper follows). The instructor also explains the get-price function and the handle_data function, which is called every period to check whether it is time to trade. The run_strategy function then gathers the appropriate arguments using the advisor's compute-signal function, followed by the rebalance function looping through all securities in the universe to place orders that achieve the target percentages. Finally, the analyze function is used to analyze the backtested strategy.
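
As an example of the pattern, here is a hedged Bollinger Bands helper built on TA-Lib that keeps only the latest band values; the buy-low/sell-high convention and the thresholds are assumptions for illustration, not the instructor's rule.

```python
# Bollinger Bands signal using TA-Lib's BBANDS; uses only the last row's values.
import numpy as np
import talib

def bollinger_signal(closes, lookback=20):
    """closes: 1-D numpy array of closing prices (float)."""
    upper, middle, lower = talib.BBANDS(closes, timeperiod=lookback)
    if closes[-1] > upper[-1]:
        return -1.0      # price above the upper band: sell / fade signal
    if closes[-1] < lower[-1]:
        return 1.0       # price below the lower band: buy signal
    return 0.0

prices = np.cumsum(np.random.randn(100)) + 100.0   # toy random-walk prices
print(bollinger_signal(prices))
```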

  • 00:55:00 The speaker discusses managing a portfolio of strategies to improve algorithmic trading profits. Instead of relying on a single strategy, the speaker suggests running multiple strategies simultaneously or in different periods. Four methods for managing a strategy portfolio are offered: creating a committee, a regime-switching model, dynamic allocation, and factor-based investing. Taking an average of the signals improves their stability. The strategy's code adds an agent responsible for choosing advisors and allocating capital; the agent uses a weighting function to update the weight of each advisor, and these weights are taken into account in the rebalance function.

  • 01:00:00 The speaker explains how they define and weight the portfolio based on the number of advisors, with equal allocation for all of them. They show how to create separate expert advisors and then create an agent to allocate capital between them. They run a quick backtest, which shows a significant improvement in performance compared to the individual cases. The speaker emphasizes the importance of drawdown in a trading strategy and recommends looking at the Sortino ratio and the stability of the profit-and-loss curve. Overall, the equal-weighted portfolio of advisors shows significantly improved performance, but the speaker indicates there is still room for improvement.

  • 01:05:00 The speaker discusses a concept called "no-regret trading", which tries to determine which investment strategy performs best in a market where future trends are hard to predict. The approach varies the weights of each strategy rather than relying on one strategy to outperform the others. The speaker recommends using the exponential gradient algorithm to determine the weighting, which adjusts the weights according to how the portfolio responds to market scenarios (the update rule is sketched below). The speaker also suggests the Kelly criterion, which maximizes return versus variance under a geometric Brownian motion assumption, as an alternative way to allocate capital.
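
For reference, a standard exponential-gradient (multiplicative-weights) update has the form below, where w_{i,t} is advisor i's allocation at time t, r_{i,t} is the return its signal earned over the last period, and eta is the learning rate; this is the textbook form, not a formula shown on screen:

$$ w_{i,t+1} = \frac{w_{i,t}\, e^{\eta\, r_{i,t}}}{\sum_{j} w_{j,t}\, e^{\eta\, r_{j,t}}} $$

A larger eta makes the allocation chase recent performance more aggressively, while eta = 0 leaves the allocation static (an equal-weight committee if the weights start equal).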

  • 01:10:00 The speaker explains the output of the weights and how they differ across advisors. They then test a random signal, which should ideally receive a lower allocation than the other signals if it is truly random. The speaker also describes the agent function, which takes a list of advisors and a learning-rate parameter and computes the weighting function: it loops through the advisors list, computes each advisor's signal, adds them up sector-wise, and writes the computed weights back to the context weights. The speaker concludes the section with guidelines on strategy development, including avoiding overfitting and checking account leverage, and provides a list of demo strategies for viewers to explore.

  • 01:15:00 The speaker discusses the different ways of forward testing, including paper trading or trading with a small amount of capital in a live market. They also mention that BlueShift does not support PyTorch or Jupyter Notebook at this time but is planning to support Keras and TensorFlow. In addition, the platform is not limited to Indian markets and can access US and Indian equity data and FX data. The speaker also notes that BlueShift does not have built-in debugging tools at this time but is considering adding them in the future.

  • 01:20:00 The speaker discusses option backtesting and explains that most platforms offering it are unreliable or require a lot of data cleaning and arranging. They also mention that the Indian platforms only support liquid futures and do not allow any third-party data feeds. The recommended minimum backtesting period depends on trading frequency, and while one-minute data for Indian markets is available, optimization runs are not efficient due to technology limitations and a preference for optimizing parameters based on expected returns. BlueShift does not charge any fees, and there are no restrictions on the number of separate backtests that can be run simultaneously, as long as the website traffic can accommodate them. It is also possible to do backtesting for PSA and to use Python packages, although the list of available packages is restricted for security reasons.
Develop And Backtest Your Trading Strategies | Full Tutorial
Develop And Backtest Your Trading Strategies | Full Tutorial
  • 2018.08.29
  • www.youtube.com
This tutorial is your step by step guide to learn to develop and backtest your trading strategies using technical indicators along with portfolio management ...
 

Forex Trading Strategies | Develop and Backtest Trading Ideas | Full FX Tutorial



Forex Trading Strategies | Develop and Backtest Trading Ideas | Full FX Tutorial

During this informative webinar, the speaker provides a comprehensive overview of BlueShift, a powerful strategy development platform for systematic trading strategy research and backtesting. The platform offers a range of features and functionalities that make it an ideal tool for traders.

BlueShift is a cloud-based platform, which means users can access it from anywhere, allowing them to develop and analyze strategies on the go. It provides users with inbuilt financial datasets, making it convenient to access relevant market data for strategy development.

While the webinar primarily focuses on the foreign exchange (FX) market, the BlueShift platform also supports equity and futures trading across various markets. It emphasizes that the intellectual property of the backtesting strategies developed on the platform belongs entirely to the user, ensuring confidentiality and ownership.

The speaker delves into the nature of the foreign exchange market, highlighting its status as the largest decentralized market with a staggering daily trading volume of approximately 5 trillion dollars. Within this volume, around 300 billion dollars can be attributed to retail trading. The speaker discusses several factors that differentiate the FX market from the equity market, such as higher leverage, easier shorting opportunities, and relatively lower volatility.

To understand what drives the forex market, the speaker points out the significance of macroeconomic factors such as balance of payments, interest rates, inflation, economic growth, and fiscal policies. They also mention that corporate and hedging flows, as well as sudden political and geopolitical changes, can have a considerable impact on the market. However, it's important to note that there is no standard or widely accepted methodology for valuing the forex market. The speaker briefly mentions methods such as purchasing power parity and real effective exchange rate, with more advanced techniques preferred by large institutions and the International Monetary Fund (IMF). Additionally, the speaker emphasizes the importance of short-term funding markets in driving liquidity and determining overnight rollover costs.
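
As a hedged illustration of the purchasing-power-parity idea mentioned above (this is the textbook relative-PPP relation, not a formula from the webinar), with S the spot rate quoted as domestic currency per unit of foreign currency and pi the respective inflation rates:

$$ \frac{S_{t+1}}{S_t} \approx \frac{1 + \pi_{\text{domestic}}}{1 + \pi_{\text{foreign}}} $$

Higher domestic inflation therefore implies a depreciating domestic currency under PPP, which is one of the simple valuation anchors the speaker contrasts with the more advanced models preferred by large institutions and the IMF.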

When it comes to developing and backtesting forex trading strategies, the speaker introduces various approaches. Economic models, such as the monetary model and the behavioral equilibrium exchange rate model, use econometric methods to analyze data. Data-driven models, including time series forecasting, non-linear time series, and neural networks, are also discussed as viable options for short-duration forex trading. The BlueShift platform is presented as a user-friendly interface that facilitates strategy development and testing. Users can input datasets, starting capital, and metadata descriptions, among other details. The platform provides tools for full backtesting as well as running quick backtests. Built on Python's Zipline API, BlueShift offers a standard strategy template for users to begin their development process.

The speaker elaborates on the basic structure of forex trading strategies and the key functions required for backtesting. They explain the "initialize" function, which sets up the backtest parameters and account parameters. The "before trading start" function is called once per day at the start of the trading session, followed by the "handle data" function, which is called every minute for the minute-level dataset. Finally, the "strategy" function is scheduled using the API for a specific time and date, with the rules defined by the user. After running a quick backtest, users can access the backtest tab to view different sets of data, including the equity curve, tear sheets, and other statistics.

The tear sheet, explained by the speaker, provides a set of reports for analyzing trading strategies. It includes parameters such as the maximum Omega ratio, Sortino ratio, skewness, kurtosis, stability of the time series, and more. The speaker demonstrates the workflow using BlueShift, which involves initializing, going through "before trading start" and "handle data," and utilizing various API functions such as scheduling, setting commissions, setting slippage, and setting account currency. The speaker mentions the availability of a standard template for forex trading strategies.
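
To make those statistics concrete, here is a hedged sketch of how a couple of the tear-sheet numbers could be computed from a daily-returns series; it mirrors common definitions but is not BlueShift's internal implementation.

```python
# Sortino ratio and an R-squared "stability" measure from daily returns.
import numpy as np
import pandas as pd

def sortino_ratio(returns, risk_free=0.0, periods=252):
    excess = returns - risk_free / periods
    downside = excess[excess < 0]
    downside_dev = np.sqrt((downside ** 2).mean())
    return np.sqrt(periods) * excess.mean() / downside_dev if downside_dev > 0 else np.nan

def stability(returns):
    # R-squared of a straight-line fit to the cumulative log equity curve
    cum_log = np.log1p(returns).cumsum()
    corr = np.corrcoef(np.arange(len(cum_log)), cum_log)[0, 1]
    return corr ** 2

rets = pd.Series(np.random.normal(0.0004, 0.01, 750))   # toy daily returns
print(sortino_ratio(rets), stability(rets), rets.skew(), rets.kurtosis())
```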

The speaker mentions the availability of a standard template for forex trading strategies in the BlueShift platform. This template provides a starting point for users to develop their strategies by defining their entry and exit rules, risk management parameters, and other customization options.

The BlueShift platform also offers a wide range of built-in technical indicators, including moving averages, oscillators, and trend-following indicators, which can be used to build trading rules and signals. Users can combine these indicators with their own custom logic to create unique and personalized strategies.

To validate and evaluate the performance of a trading strategy, the speaker emphasizes the importance of conducting rigorous backtesting. BlueShift allows users to backtest their strategies using historical data to simulate real-world trading scenarios. The platform provides comprehensive performance metrics, including profitability, drawdown analysis, risk-adjusted returns, and various ratios like Sharpe ratio, Sortino ratio, and Calmar ratio.

Once a strategy has been backtested and validated, the speaker suggests the next step is to deploy it in a live trading environment. BlueShift provides integration with multiple brokerages, allowing users to execute their strategies directly from the platform. This seamless integration ensures a smooth transition from strategy development to live trading.

The speaker concludes the webinar by highlighting the benefits of using BlueShift for forex strategy development and backtesting. The platform offers a user-friendly interface, access to diverse financial datasets, and a comprehensive set of tools and indicators. It empowers traders to develop, test, and deploy their forex trading strategies with ease and efficiency.

The webinar provides a detailed overview of the BlueShift platform, its capabilities, and its application in forex trading strategy development. It offers valuable insights into the forex market, different modeling approaches, and the importance of robust backtesting. Traders looking to enhance their forex trading strategies may find BlueShift to be a valuable tool in their arsenal.

  • 00:00:00 The speaker provides an overview of BlueShift, which is a strategy development platform for systematic trading strategy research and backtesting. It includes inbuilt financial datasets, and it is available on the cloud, so users can develop and analyze strategies on the go from anywhere. The webinar focuses mainly on FX but also covers equity and futures across various markets, and the intellectual property of the developed backtesting strategies belongs entirely to the user. The speaker goes on to describe the foreign exchange market, which is the most prominent decentralized market with a daily volume of approximately 5 trillion dollars, 300 billion of which is retail volume. Factors that differentiate it from the equity market include higher leverage, easy shorting, and a lower level of volatility, which the speaker discusses in detail.

  • 00:05:00 The speaker discusses what drives the Forex market, highlighting macroeconomic factors like balance of payments, rates, inflation, economic growth, and fiscal policy. Corporate and hedging flows, as well as significant events like sudden political and geopolitical changes, can also have a significant impact on the market. The speaker notes that there is no standard or widely accepted methodology for valuing the Forex market, although some methods include purchasing power parity and real effective exchange rate, with more advanced methods preferred by large institutions and the IMF. The speaker also emphasizes the importance of short-term funding markets, as they drive liquidity and determine overnight rollover costs.

  • 00:10:00 The speaker discusses different approaches to developing and backtesting forex trading strategies. One approach is through economic models such as the monetary model and the behavioral equilibrium exchange rate model, both of which use econometric methods to analyze data. Other data-driven models such as time series forecasting, non-linear time series, and neural networks can also be used for short-duration forex trading. The speaker then introduces the BlueShift platform, which provides a user-friendly interface for developing and testing trading strategies: users can input data sets, starting capital, and metadata descriptions, among other things, and the platform provides tools for full backtesting as well as running a quick backtest. The platform is built on Python's Zipline API and provides a standard strategy template for users.

  • 00:15:00 The speaker discusses the basic structure of forex trading strategies and the key functions required for backtesting. The first function is "initialize," which sets up the backtest parameters and accounting parameters. The second function is "before trading start," which is called once per day at the start of the trading session, followed by "handle data," which is called every minute for the minute-level data set. Finally, the "strategy" function is scheduled using the API for a specific time and date, and the rules are defined by the user. After running a quick backtest, the user can access the backtest tab to view different sets of data, including the equity curve, tear sheets, and other statistics.

  • 00:20:00 The speaker discusses the tear sheet and its usefulness in providing a set of reports for analyzing trading strategies. The tear sheet includes parameters like the maximum Omega ratio, Sortino ratio, skewness, kurtosis, stability of the time series, and more. The speaker also explains the workflow using Blueshift, which starts with initializing, then goes through before trading start and handle data, and uses API functions such as the schedule function, set commissions, set slippage, and set account currency. For the foreign exchange market, a standard template is available to start with, which includes strategy parameters and imports data such as GDP, inflation, short rates, and long rates from the finance module in Zipline.

  • 00:25:00 The speaker discusses how to set up a basic template for developing forex trading strategies. They explain the importance of maintaining parameters in one central place, defining the universe and using a scheduled function to compute rollovers. They also detail how to set commissions and slippage and redefine how to compute rollovers and technical indicators. They mention the technical analysis library as a useful resource for accessing built-in technical indicators. Finally, they emphasize that the backtest run can be cancelled at any point and suggest using this basic template to begin developing more complex strategies.

  • 00:30:00 The speaker discusses systematic strategies in forex and how they revolve around finding and exploiting factors systematically. Risk factors, such as value, momentum, carry, and defensive strategies, are the four basic factors in forex trading. Value focuses on ranking currencies in terms of valuation, while momentum relies on the difference in time series and cross-sectional momentum to go long on top-ranked securities and short on those ranked at the bottom. Carry strategies exploit the difference in interest rates between currency pairs. Finally, defensive strategies presume low-risk currencies are undervalued while high-risk currencies are overvalued, and they focus on risk-adjusted returns.

  • 00:35:00 The presenter demonstrates how to develop and backtest various trading ideas using the BlueShift platform. Specifically, he introduces a new signal function for carry, which calculates the rate differential for each currency pair in the trading universe and sorts the pairs so as to take long positions in the top-ranked pairs, short positions in the bottom-ranked pairs, and zero positions in the rest (an illustrative version appears below). The same approach is applied to the momentum and value factors, and a factor basket strategy is also created by combining the three preceding strategies. The presenter emphasizes that the effort required to develop different strategies is minimal, as it primarily involves defining the relevant signal functions and calling them in the appropriate places in the rebalance function.
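
An illustrative carry-style signal in that spirit is sketched below: rank currency pairs by their interest-rate differential, go long the top few, short the bottom few, and leave the rest flat. The rate inputs and pair names are placeholders, not the presenter's data.

```python
# Rank pairs by carry (rate differential), long the top n, short the bottom n.
import pandas as pd

def signal_function_carry(rate_differentials, n=2):
    """rate_differentials: Series indexed by currency pair,
    value = base-currency rate minus quote-currency rate."""
    ranked = rate_differentials.sort_values(ascending=False)
    weights = pd.Series(0.0, index=ranked.index)
    weights.iloc[:n] = 1.0 / n        # long the highest-carry pairs
    weights.iloc[-n:] = -1.0 / n      # short the lowest-carry pairs
    return weights

diffs = pd.Series({"AUD/JPY": 0.035, "NZD/USD": 0.020, "EUR/USD": -0.005,
                   "USD/CHF": 0.015, "EUR/JPY": 0.002, "GBP/USD": 0.010})
print(signal_function_carry(diffs))
```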

  • 00:40:00 The speaker explains how different Forex trading strategies can be created with minimal work using a template that does most of the work automatically. The speaker also shares a strategy spectrum that plots the types of strategies worth exploring depending on one's trading style, whether quant, technical day trader, or fundamental trader. On the horizontal axis, the spectrum shows the genesis of one's profits: a trending market, a mean-reverting market, a breakout, or a carry market, which is almost flat. The speaker then goes on to explain different trading strategies for each trading style, such as momentum strategies, time-series and cross-sectional strategies, and statistical arbitrage, among others.

  • 00:45:00 The speaker discusses the importance of combining fundamental, technical, and quantitative analysis when trading Forex. They explain that while technical and quantitative analysis are typically easier to deploy and create confidence in systematic strategies, the most value from a fundamental trading style comes from event-based trading. The speaker then outlines the design cycle for a systematic trading strategy, which includes selecting a universe, generating a signal, deciding on a target portfolio, and analyzing performance for continuous improvement. They also touch on the importance of avoiding backtesting errors, such as look-ahead bias, and utilizing a robust platform like Blueshift for event-driven backtesting.

  • 00:50:00 The speaker discusses the various steps involved in creating a Forex trading strategy, starting with the ideation phase and then moving on to the backtesting phase. He stresses the importance of creating uncorrelated strategies, since two uncorrelated strategies are always better than one. The speaker also mentions different methods for risk capital allocation, such as the Kelly criterion, equal weighting, and momentum weighting. Additionally, he provides an example strategy using the Bollinger Bands technical indicator and shows the impressive statistics of the backtest results. He concludes by highlighting the importance of measuring the stability of the strategy's returns over time to ensure consistency and avoid overfitting.

  • 00:55:00 The speaker discusses various trading strategies they have developed, including a momentum-based strategy and a correlation-based momentum trade strategy. They also offer an "FX Daily" template that computes various technical indicators at the start of each day and uses them to decide whether to go long or short. The speaker emphasizes the importance of back-testing strategies in a scientific way and avoiding the pitfalls of optimization where a strategy may perform well in back-testing but fail in live trading. The goal should be to optimize forward-looking live performance rather than back-testing performance based on a small set of variations.

  • 01:00:00 The speaker discusses the issue of over-optimization when developing and backtesting trading ideas. Over-optimization can lead to a decrease in the Sharpe ratio, resulting in ineffective live trading. The speaker suggests four options to combat this problem. One suggestion is to use adaptive strategies that react to market changes. Another suggestion is to use statistical solutions such as change point analysis or hidden Markov models to flip strategies based on changes in the market. The third suggestion is to do stable factor research to identify factors that have been proven theoretically and empirically to provide profitable trades. Lastly, the speaker recommends using out-of-sample testing, which involves testing the model on data that has not been used in the optimization process to ensure that the model is not overfit.

  • 01:05:00 The video discusses the importance of extracting and isolating factors that can lead to stable and consistent returns in forex trading. One such factor is momentum, which has a strong empirical foundation and can be a good strategy in any market, apart from the occasional momentum crashes. The video also talks about validation techniques such as cross-validation, which can be challenging in the FX market since it breaks the continuity of the time series. Instead, traders can count the number of signals generated and the duration of each trade, generate a randomized set of signals with the same counts and durations, and compare them to the backtested ones to gauge how robust the strategy is (a sketch of this idea follows). Additionally, the video emphasizes that automation is not a black box and that traders should understand the underlying factors that drive P&L and the risks involved with each strategy.
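
A hedged sketch of that randomisation idea: keep the number of trades and their holding periods from the real backtest, place them at random start points in the price history, and compare the resulting return distribution with the strategy's actual return. All inputs below are placeholders.

```python
# Randomised trade placement as a robustness check for a backtested strategy.
import numpy as np

def random_trade_returns(price, holding_periods, n_sims=1000, seed=0):
    """price: 1-D array of closes; holding_periods: bar counts taken
    from the real strategy's trades."""
    rng = np.random.default_rng(seed)
    sims = []
    for _ in range(n_sims):
        total = 0.0
        for h in holding_periods:
            start = rng.integers(0, len(price) - h - 1)
            total += np.log(price[start + h] / price[start])  # one random trade
        sims.append(total)
    return np.array(sims)

price = np.cumprod(1 + np.random.normal(0, 0.01, 2000)) * 100   # toy price path
sims = random_trade_returns(price, holding_periods=[20, 35, 10, 50, 15])
strategy_return = 0.08   # placeholder: the real backtest's log return
print("fraction of random runs beating the strategy:",
      (sims >= strategy_return).mean())
```

If a large fraction of the random runs beat the real strategy, the backtest result is more likely to be luck than a robust edge.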

  • 01:10:00 The speaker suggests that strategy is not about man versus machine but rather about man and machine working together. The human brain is better suited to developing hypotheses, while machines are faster at chasing them. In terms of strategy development advice for users of the Blue Shift platform, the speaker recommends using all strategy parameters in the context environment, checking account leverage, using the schedule function for weekly or daily strategies, testing practice results, and checking for overfitting. Users are also encouraged to try the specific Forex strategies available on the platform's Github account and reach out for support if needed. Finally, Liza from FXCM invites users to contact them for any questions they may have about FX market data.

  • 01:15:00 The speaker answers various questions from users, such as whether the session will be recorded (yes), if they can trade live (no), and if they will talk about platform and strategy testing (already answered). They also state that they currently cover US and Indian equity markets, as well as the top 10 currencies through fxcm, but plan to add crypto soon. The speaker also addresses the issue of debugging, mentioning that while they do not have a good debugger at present, basic print statements can be used. Finally, they mention that Python does not allow for "supply not tally" but do not understand what the user means by this.

  • 01:20:00 The speaker discusses the difficulty of finding a small amount of historical data that accurately covers all expected market moves in the forex market due to its close link to macroeconomic factors. It is challenging to define a data set that can represent all expected market conditions. While the speaker cannot recommend any particular book for beginners to learn forex trading, he suggests following research articles from central banks such as the IMF, which provides foreign currency-oriented reports that are a good starting point for beginners. In terms of high-frequency trading, sending thousands of orders per second is usually not sustainable for retail traders, and the speaker does not suggest dividing data into in-sample and out-of-sample tests. Instead, randomized testing is recommended to generate random signals.

  • 01:25:00 The speaker discusses backtesting and factor-based investing. They emphasize the importance of analyzing the signals and duration of trades in backtesting to gain a better understanding of the results. They also discuss the potential risks of factor-based investing, such as overcrowding and the fact that factors are beta, meaning they do not consistently work. However, they do suggest that factor-based investing can be good for non-technical individuals in the long term. The speaker also addresses questions about the necessary statistical background for trading and the availability of additional Python libraries for analysis. They conclude that while basic knowledge of Python is helpful, the focus should be on developing strategy logic rather than expertise in the programming language. However, there are currently no built-in functions available for resampling in 15-minute intervals due to potential performance and alignment issues.
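
Since that resampling has to happen outside the platform, here is a hedged pandas sketch of building 15-minute bars from minute data offline; the column names are assumptions about the minute data's layout.

```python
# Build 15-minute OHLCV bars from 1-minute bars with pandas.
import pandas as pd

def to_15min(minute_df):
    """minute_df: DataFrame indexed by timestamp with columns
    open/high/low/close/volume at 1-minute frequency."""
    agg = {"open": "first", "high": "max", "low": "min",
           "close": "last", "volume": "sum"}
    return minute_df.resample("15min").agg(agg).dropna()

# usage: bars_15 = to_15min(minute_bars)
```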

  • 01:30:00 The speaker believes it is better to create the resampled data series, store them in a database, and provide them as ready-made output rather than building a separate resampling library, so users can work with them more efficiently. In terms of price action strategies, the speaker cautions that at least level-two or higher data is needed to develop them effectively. Furthermore, they state that the currently available data is not sufficient to build efficient price action strategies and that they may not be able to provide such data anytime soon. When asked about legal regulations on trading currency pairs outside of MCX, the speaker states that one must have validation for investment or hedging purposes and that they do not know much beyond that.

  • 01:35:00 The speaker explained the process of combining technical indicators in a trading strategy and backtesting them using a demo account before implementing them in a real market situation. The speaker emphasized that traders should choose indicators that complement each other instead of those that are similar, and they should be aware of the significance of each indicator in the strategy. The demo account allows traders to test their strategy under different scenarios and assess its effectiveness before risking real funds.
Forex Trading Strategies | Develop and Backtest Trading Ideas | Full FX Tutorial
Forex Trading Strategies | Develop and Backtest Trading Ideas | Full FX Tutorial
  • 2018.08.16
  • www.youtube.com
In this joint session by FXCM & QuantInsti®, you’ll get to learn about the FX market data, trading strategies, backtesting & optimization techniques along wi...
 

How EPAT Can Help You! by Nitesh Khandelwal - June 28, 2018



How EPAT Can Help You! by Nitesh Khandelwal - June 28, 2018

Nitesh Khandelwal, the speaker, introduces himself and his company, QuantInsti, as a provider of algorithmic and quantitative trading education for the past eight years. He begins by sharing his personal background, starting from his engineering days to his experience in the banking industry. He then highlights the launch of the Executive Programme in Algorithmic Trading (EPAT), a six-month program that offers consulting, training, and a smooth transition towards trading, including in the high-frequency trading (HFT) domain. Khandelwal mentions his experience in Singapore, where he set up trading operations connected to exchanges worldwide and expanded the business on a global scale.

Moving on, Khandelwal discusses algorithmic trading and its growth in comparison to DIY (do-it-yourself) trading. He shares statistics indicating the significant rise of algorithmic trading in Asia, Europe, and the US, highlighting how traders now prefer making their own trading decisions rather than relying on brokers. However, he notes that while algorithmic trading constitutes a significant portion of market activity in India, retail participation remains relatively low. Khandelwal references an article from Bloomberg that explores the increasing role of robots in replacing finance jobs.

Khandelwal goes on to explain why retail traders have been unable to adopt algorithmic trading and suggests ways to ensure it becomes an enabler rather than a threat. He emphasizes the need for statistical and technical knowledge, access to quality market data and efficient brokers, and guidance from practitioners when transitioning to automation. He explains how EPAT was created to address these needs and provide guidance to individuals interested in algo trading or automating their strategies.

Next, Khandelwal discusses the features of EPAT. He mentions that the program offers rich content created by practitioners, domain experts, and leading fund managers. The curriculum is continuously updated to align with market requirements, and lifelong access to updated content is provided. EPAT includes a dedicated support team to resolve queries, faculty guidance for alumni, and a career cell that assists in job opportunities, setting up trading desks, finding relevant brokers and data vendors, and more. Additionally, EPAT participants gain access to exclusive features available only to them.

Khandelwal highlights the importance of the primer module in EPAT, which ensures that all participants start the course on the same page. The primer module covers the basics of Excel, Python, statistics, and financial markets, which are fundamental building blocks of algorithmic trading. He explains how the primer module evolves over time to provide maximum value extraction from the program. Furthermore, Khandelwal discusses the relevance of Python as the most widely used programming language in algorithmic and quantitative trading, which led to its inclusion in the EPAT program.

The speaker then delves into the different modules covered in EPAT and how they are approached. The program covers data analysis and modeling in Python, advanced statistical methodologies, equity, FX, and futures strategies, and machine learning for trading. Khandelwal emphasizes the importance of understanding the infrastructure and operations behind trading strategies, as well as options trading strategies, portfolio optimization, and operational risk in algorithmic trading. He also highlights the significance of completing a project under the mentorship of a domain expert and taking the EPAT exam to obtain a verified certificate.

Khandelwal provides an overview of the EPAT certificate program, which spans over six months and includes over 100 hours of classroom connect, hands-on experience, and over 300 hours of coursework. He mentions the distinguished faculty members who teach the program, including practitioners, academics, and successful traders. The program offers placement opportunities and assists participants in CV and interview preparation, skill gap identification, and access to placement partners such as brokers and investment banks. EPAT participants also gain access to privileged brokerage data and API providers, as well as advanced backtesting tools like the Contra Blue simulator.

Furthermore, Khandelwal discusses the benefits of EPAT and how it adds value to participants. He mentions access to minute-level data for Indian markets and S&P 500 stocks, continued learning opportunities, career assistance, and alumni reunions. He emphasizes that EPAT goes beyond just a certificate and provides a fundamental quantitative dimension to existing skill sets. Khandelwal clarifies that EPAT focuses on teaching participants how to create and validate trading strategies rather than providing ready-made working strategies. He acknowledges that the success ratio of strategies varies depending on factors such as infrastructure access, risk management, and risk appetite.

Khandelwal addresses a question about whether technical analysts can automate their trading using strategies like MACD crossovers, moving averages, and RSI after studying EPAT. He confirms that the program covers these strategies, ensuring participants have the knowledge and tools to automate their trading.

The speaker then moves on to discuss the investments required to start one's own algorithmic trading desk, explaining that the required setup depends on the trading frequency of the desk. He mentions that EPAT primarily focuses on low- and medium-frequency trading but also covers aspects of high-frequency strategies. The program combines Python, Excel, R, and MATLAB and requires programming skills and conceptual clarity. EPAT provides guidance for students to set up their own trading desks. While EPAT does not guarantee job placements, they offer guidance to alumni who seek it.

Khandelwal clarifies that while EPAT does not provide placement guarantees, they do offer counseling to ensure candidates have a basic understanding of algorithmic trading before enrolling in the program. He highlights the success of many actively seeking EPAT students in landing jobs or making career changes due to the program's extensive network of placement partners. He mentions that EPAT's learning management system provides lifetime access to all sessions and updated content, and the course requires a time commitment of approximately 300 hours, which can be spread out over three months by dedicating an hour daily. Khandelwal emphasizes that EPAT's focus on practical implementation sets it apart from more theoretical courses.

Khandelwal discusses the fee structure for the EPAT course, which is $4,720 for developed markets and INR 189,000 plus GST for India. He also mentions the need for brokers and APIs to code strategies and explains that participants can expect career assistance in Hong Kong, although the EPAT team has had more success in India and Singapore. He advises that while the EPAT modules are interdependent and should be taken as a whole, one to two hours of daily effort should be sufficient for those with limited trading knowledge. He concludes by mentioning that the EPAT course covers all types of trading strategy paradigms and offers remote work opportunities for participants and alumni.

In the closing remarks, the speaker highlights that the EPAT program is comprehensive and provides complete access to all modules, making it valuable for individuals with a technology background looking to enter the algorithmic trading field. They mention the various job opportunities available in the domain, with many cases of EPAT participants starting their own ventures or securing jobs with prominent firms after completing the program. The speaker emphasizes the importance of understanding basic statistics, correlation, and regression to succeed in this field. Lastly, they emphasize that automated trading strategies do generate profit and account for nearly 50% of overall volumes in India, indicating the significant potential for those interested in algorithmic trading.

  • 00:00:00 Nitesh Khandelwal introduces himself and his company, QuantInsti, which has been providing algorithmic and quantitative trading education for the past eight years. He also shares his personal background, starting from his engineering days to his experience in the banking industry and finally launching the Executive Programme in Algorithmic Trading (EPAT), a six-month program that provides consulting, training, and a smooth transition towards trading, including in the high-frequency trading (HFT) domain. Khandelwal also briefly discusses his experience in Singapore setting up trading operations connected to exchanges worldwide and expanding the business from a global perspective.

  • 00:05:00 Nitesh Khandelwal talks about his experience with QuantInsti and how they are adding more value to their audience and participants in the quantitative trading industry. He then asks the audience whether they have traded before and shares snippets from regulatory documents on the definition of algorithmic trading, such as the definition from the Securities and Exchange Board of India and the MiFID II regulations in Europe. Khandelwal explains that systematic trading is considered algorithmic trading when it is automated and utilizes specific algorithms.

  • 00:10:00 Nitesh Khandelwal discusses algorithmic trading and compares it to DIY (do-it-yourself) trading. Algorithmic trading has grown significantly in Asia, from a couple of percentage points in 2004 to over 30% in 2016, and now represents 66% of trades in the US and 44% in Europe. The rise of algorithmic trading has been in proportion to the number of traders who are now making their own trading decisions and not relying on their brokers. However, while algorithmic trading makes up 30-45% of market activity in India, retail participation is estimated to only be around 2%. Khandelwal then mentions an article from Bloomberg highlighting how robots are increasingly replacing various roles in the finance job market.

  • 00:15:00 Nitesh Khandelwal explains why retail traders have not been able to adopt algorithmic trading and what can be done to ensure that it becomes an enabler rather than a threat. He highlights that quantitative or automated trading requires statistical and technical know-how, and access to quality market data and markets through efficient brokers. Practitioner guidance is also crucial when making the transition to automation, especially since there are many factors that traders need to be aware of. Khandelwal discusses how EPAT was created to address these needs and offers guidance to those who wish to pursue algo or automate their strategies.

  • 00:20:00 Nitesh Khandelwal discusses the features of EPAT. The program consists of rich content created by practitioners, domain experts, and leading fund managers. The curriculum is continuously updated to remain aligned with market requirements, and lifelong access to updated content is offered. The program provides a dedicated support team that resolves queries within a defined period, and alumni receive guidance from the faculty on resolving queries. EPAT features a career cell that assists in finding job opportunities, setting up trading desks, finding relevant brokers, data vendors, or collaborations, and more. Additionally, the program includes exclusive features available only to EPAT participants.

  • 00:25:00 Nitesh Khandelwal discusses how the primer module plays a crucial role in making sure that everyone who participates in the EPAT program is on the same page before starting the course. The primer module covers the basics of Excel, Python, statistics, and financial markets, which are the building blocks of algorithmic trading. Khandelwal explains how the primer evolves to become more interactive over time to ensure maximum value extraction from the program. Moreover, Khandelwal sheds light on how Python has become the most relevant programming language in the algorithmic and quantitative trading world in the last few years, which is the reason they replaced C++ and Java with Python in their program.

  • 00:30:00 Nitesh Khandelwal discusses the different modules that are covered in EPAT and how they are approached. The first module involves data analysis and modeling in Python, which covers topics such as how to get data using different APIs, how to analyze and use the data in a strategy, and how to code the strategy and send out orders. The module then proceeds to cover more advanced statistical methodologies, such as ARIMA, ARCH models, and Gaussian mixture models. Following this, the equity, FX, and futures strategies module is introduced, which covers different execution strategies, optimization, and momentum and statistical arbitrage. The video concludes with a discussion on machine learning for trading, which has become increasingly popular and is regularly covered in the EPAT course.

  • 00:35:00 Nitesh Khandelwal explains the importance of understanding the infrastructure and operations behind trading strategies, using the analogy of an F1 car race driver needing to understand the internals of their car. He also covers topics such as options trading strategies from a risk management perspective, portfolio optimization, and the importance of operational risk in algorithmic trading. Additionally, he emphasizes the significance of completing a project under the mentorship of a relevant domain expert and taking the EPAT exam to obtain a verified certificate for industries.

  • 00:40:00 Nitesh Khandelwal discusses the EPAT certificate program, which is comprised of 100 plus hours of classroom connect, hands-on experience, and over 300 hours of coursework to be completed over a six-month period. The program is taught by a group of distinguished faculty members who have contributed significantly to the algorithmic trading industry, with a mix of practitioners, academics, and successful traders among their ranks. The faculty members bring in leading industry experts as guest lecturers, and the EPAT certificate program includes placement opportunities as well.

  • 00:45:00 Nitesh Khandelwal discusses the ways in which EPAT program can help its participants, including CV and interview preparation, identifying skill gaps and filling them, and providing access to placement partners such as major brokers and investment banks. EPAT participants also have access to privileged brokerage data and API providers, with some offering free trading for a limited time. The program offers recognition and value-adds to its graduates through exclusive events and sessions, subsidized access to brokers and APIs, and advanced back testing tools like the ContraBlue simulator.

  • 00:50:00 Nitesh Khandelwal explains some of the benefits of EPAT, such as access to minute level data for Indian markets and S&P 500 stocks, continued learning, career assistance, and alumni reunions. He emphasizes that EPAT is more than just a certificate and that it adds a fundamental quantitative dimension to existing skill sets. Khandelwal also clarifies that EPAT is not about giving out working strategies but about learning how to create and validate them. He addresses a question about the success ratio of strategies and explains that it varies from person to person depending on factors such as access to infrastructure, risk management, and risk appetite. Lastly, Khandelwal answers another question about whether technical analysts can automate their trading using strategies like MACD crossovers, moving averages, and RSI after studying EPAT, to which he confirms that it is covered in the program.

  • 00:55:00 Nitesh Khandelwal discusses the investments required for starting one's own algorithmic trading desk and notes that the required setup depends on the trading frequency of the desk. He mentions that EPAT mainly focuses on low- and medium-frequency trading but has some aspects of high-frequency strategies as well. The trading course combines Python, Excel, R and MATLAB. The program requires programming skills and conceptual clarity and provides guidance for students to set up their own desks. While EPAT does not provide any guarantee of job placements, they do offer guidance to alumni who seek it.

  • 01:00:00 Nitesh Khandelwal clarifies that while EPAT does not offer any guarantees for placements, they do provide counseling to ensure that candidates have a basic idea of algorithmic trading before enrolling for the program. EPAT has been successful in assisting many of its actively seeking students in landing a job or making a job change due to their vast network of close to a hundred placement partners who value the knowledge and practical implementation skills taught in this niche and practitioner-oriented course. EPAT’s learning management system provides lifetime access to all sessions and updated content, and the course requires a time commitment of about 300 hours, which can be spread out over three months by dedicating an hour daily. Khandelwal emphasizes that EPAT’s focus on the practical implementation of algorithmic trading sets it apart from other courses that are more theoretical.

  • 01:05:00 Nitesh Khandelwal discusses the fee structure for the EPAT course, which is $4,720 for developed markets and INR 189,000 plus GST for India. He also mentions the need for brokers and APIs to code strategies and explains that participants can expect career assistance in Hong Kong, but the EPAT team has had more success in India and Singapore. Khandelwal emphasizes that the EPAT modules are interdependent and should be taken as a whole, but states that one to two hours of daily effort should be sufficient for those with very little trading knowledge. The EPAT course covers all types of trading strategy paradigms and offers remote work opportunities for participants and alumni.

  • 01:10:00 The speaker talks about how the EPAT program is comprehensive and provides complete access to all modules, making it valuable for participants with a technology background looking to enter the algorithmic trading field. They highlight the different job opportunities available in the domain, with numerous cases of participants starting their own ventures or getting jobs with big firms after completing the program. Additionally, the speaker emphasizes the importance of knowing basic statistics, understanding correlation, and regression to succeed in this domain. Finally, they suggest that automatic trading strategies do make money and account for almost 50% of the overall volumes in India, indicating that this field has significant potential for those interested in it.

  • 01:15:00 Nitesh Khandelwal discusses the EPAT program, a practice-oriented program taught by working practitioners in the markets around the world. He advises novice traders to read and learn more about the markets and suggests going through the blogs and webinars provided by the EPAT program. He also mentions that MCX allows algo trading and discusses the infrastructure requirements needed for setting up one's own trading desk, as it depends on the frequency of trading and regulatory requirements. Additionally, Khandelwal mentions that EPAT has alumni in parts of Nigeria and Africa and advises those interested in the program to connect with the business team for more information.

  • 01:20:00 The speaker explains that they offer a learning management system for their online program, where enrolled students can access all lectures, recordings, tests, quizzes and assignments. The program is completely online, so there is no mandatory classroom attendance. The salary for freshers depends on their background, skills and academic pedigree, but in India, it can range from 500,000 to 2 million rupees per year. The program covers back testing on different platforms and supports complete automation. The course is instructor-led and happens on a weekly basis. While it is not self-paced, students will have access to recordings and can revisit them at a later stage if they miss a lecture. The speaker also suggests that there may be remote job opportunities available.

  • 01:25:00 Nitesh Khandelwal answers a few final questions about the EPAT program. One question asks about the number of trainers based in India, and Khandelwal estimates that around 50% are from India, with the rest coming from various countries around the world. Another question asks about whether there are any brokers or institutions based in the UK working with the program, and Khandelwal confirms that there are brokers based in the UK. He encourages viewers to reach out to the EPAT team if they have any further questions or need assistance. Overall, Khandelwal emphasizes the value of the EPAT program in helping individuals achieve their career and learning objectives.
How EPAT Can Help You! by Nitesh Khandelwal - June 28, 2018
How EPAT Can Help You! by Nitesh Khandelwal - June 28, 2018
  • 2018.06.29
  • www.youtube.com
If you've been looking to build a career into the quantitative and algorithmic trading domain, there is a high probability that you would have heard about th...
 

AMA on Algorithmic Trading | By Nitesh Khandelwal



AMA on Algorithmic Trading | By Nitesh Khandelwal

In this "ask me anything" session on algorithmic trading, Nitesh Khandelwal, co-founder of algo trading firm Eragy, welcomes the audience and shares his expertise on the topic. The session aims to cover various aspects of algorithmic trading, including platforms and brokers, trading strategies, market data, job opportunities, setting up an algo trading desk, regulations, the future of algo trading, and learning and education opportunities. Khandelwal mentions that the session will strike a balance between pre-prepared questions and live questions, and they also offer individual follow-up sessions for unanswered queries.

The presenter begins by explaining different trading strategies such as low frequency, medium frequency, and high frequency trading. These strategies are defined based on the latency of the trading infrastructure and order processing time. The focus is on emphasizing that the latency of the trading strategy is more important than the number of trades executed per second. The section then delves into where to obtain market data and economic data, discussing different data vendors such as Yahoo Finance, Google Finance, Quandl, Alpha Vantage, and FXCM. These vendors offer either downloadable data or data that can be used on their platforms.
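As a minimal sketch of the "downloadable data" route, the snippet below pulls daily bars from Yahoo Finance using the community yfinance package; the package choice, ticker, and date range are illustrative assumptions, not details from the session.

```python
# Minimal sketch: fetching free daily OHLCV data from Yahoo Finance,
# assuming the community yfinance package. Ticker and dates are examples.
import yfinance as yf

daily_bars = yf.download("RELIANCE.NS", start="2017-01-01", end="2017-12-31")
print(daily_bars.tail())
```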

Moving on, the speaker discusses the sources of data for algorithmic trading, including manual downloads, API fetching, and paid vendors such as Quandl, Global Data Feed, Trading Economics, Thomson Reuters, and Active Financial. They also address the question of whether high-frequency traders (HFT) generally outperform manual day traders, explaining that it depends on the type of day traders being analyzed. If traders are taking advantage of arbitrage opportunities or market inefficiencies, machines may be faster than manual traders. However, if traders are analyzing data and executing manual orders after thorough research, machines are not necessarily more efficient. The speaker dismisses the idea that an excessively algo-traded market is counterproductive, clarifying that automation does not always require high-frequency trading.

The concept of using algos (algorithms) in trading is explained: it allows trading with greater efficiency, since rules can be quantified with mathematical formulas and automated. However, finding market inefficiencies can be challenging, and competition in high-frequency trading and technology infrastructure is becoming more expensive. The speaker also addresses the question of how to handle multiple strategies in a single brokerage account via an API.

The prerequisites for algorithmic trading are discussed, involving knowledge of statistics and econometrics, financial computing, and quant trading. The presenter mentions that those starting from scratch can learn about these pillars through freely available resources on the QuantInsti website. For traders already familiar with trading strategies and looking to automate, they can begin by using a broker API and eventually build their own platform. The speaker also explains the various data providers for tick data and mentions that while most vendors provide snapshot data, higher-end vendors can provide true tick data at a higher cost. Lastly, it is noted that for traders who are already successful with their current trading strategies, learning algo trading may not be necessary unless they want to keep upgrading and experimenting.

The benefits of automating trading strategies are discussed, including controlling emotions, scalability, and bandwidth to work on strategies while machines handle execution. The speaker emphasizes the importance of having a programming background for success in algorithmic trading and highlights that Python is widely used by most firms globally. However, the speaker advises that high-frequency trading is not suitable for retail traders, and some strategies may require a reasonable amount of capital before seeing success. Nonetheless, even with basic knowledge of Python, one can get started in algorithmic trading.
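To illustrate how little Python is needed to get started, here is a minimal sketch of a moving-average crossover rule in pandas; the window lengths, the crossover rule itself, and the random-walk placeholder prices are assumptions for demonstration, not a strategy recommended in the session.

```python
# Illustrative sketch only: a moving-average crossover signal in pandas.
# Window lengths and the random-walk prices are assumptions for demonstration.
import numpy as np
import pandas as pd

def crossover_signals(close: pd.Series, fast: int = 20, slow: int = 50) -> pd.Series:
    """Return 1 (long) when the fast SMA is above the slow SMA, else 0 (flat)."""
    fast_ma = close.rolling(fast).mean()
    slow_ma = close.rolling(slow).mean()
    return (fast_ma > slow_ma).astype(int)

prices = pd.Series(100 + np.random.randn(500).cumsum())   # placeholder price series
signals = crossover_signals(prices)
returns = prices.pct_change() * signals.shift(1)           # act on the previous bar's signal
print(f"Cumulative return: {(1 + returns.fillna(0)).prod() - 1:.2%}")
```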

The skills required to become an algorithmic trader are discussed, including knowledge of statistics, econometrics, and trading strategies. The speaker also explains the various career opportunities in algorithmic trading, ranging from back-office roles to front-office trading roles. The speaker mentions a blog about a 40-year-old alumnus of QuantInsti who successfully transitioned into algorithmic trading without prior trading experience, and highlights QuantInsti's dedicated career cell, which helps individuals acquire the necessary skills and connect with the right people to advance in their careers.

The speaker proceeds to discuss algorithmic trading languages and their significance in research and analysis. While high-frequency trading firms prefer C++ for its lower latency, R and Python are the more popular choices for backtesting and strategy evaluation. In response to a user's question about improving the hit ratio and managing back-to-back losses, the speaker suggests optimizing parameters in backtesting and using in-sample and out-of-sample testing to check drawdowns. Market saturation is also addressed, with the speaker stating that the HFT ratio serves as an indicator of competition and that plain vanilla arbitrage strategies may not be successful in highly saturated markets.
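The in-sample/out-of-sample suggestion can be sketched as follows; the 70/30 split, the synthetic returns, and the drawdown comparison are illustrative assumptions rather than details from the session.

```python
# Illustrative sketch: split strategy returns into in-sample and out-of-sample
# slices and compare maximum drawdowns. The 70/30 split and synthetic returns
# are assumptions for demonstration only.
import numpy as np
import pandas as pd

def max_drawdown(equity: pd.Series) -> float:
    """Largest peak-to-trough decline of an equity curve, as a (negative) fraction."""
    running_peak = equity.cummax()
    return (equity / running_peak - 1.0).min()

returns = pd.Series(np.random.normal(0.0005, 0.01, 1000))  # placeholder daily returns
split = int(len(returns) * 0.7)
in_sample, out_of_sample = returns.iloc[:split], returns.iloc[split:]

# Tune parameters on the in-sample slice only; then check that the
# out-of-sample drawdown is not materially worse before trusting the result.
print("In-sample max drawdown:     ", max_drawdown((1 + in_sample).cumprod()))
print("Out-of-sample max drawdown: ", max_drawdown((1 + out_of_sample).cumprod()))
```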

Different algorithmic trading strategies are further explored, highlighting the need for a strong technology infrastructure for plain vanilla arbitrage and market making strategies. The speaker engages with various audience questions, including the meaning of total bid quantity, the impact of HFTs on traditional traders in India, and the time horizon used to analyze data for algo trading. They explain that the time horizon depends on the trading frequency. Additionally, the speaker encourages individuals with a software and data science background to venture into algo trading, stating that their background already provides a strong foundation, and picking up on the financial market side should be relatively easier.

Nitesh Khandelwal addresses several questions related to the possibility of setting up a trading platform with their company, legal approval for automation, costs, and Indian market regulations. They clarify that their company provides guidance and lifelong support to participants and alumni but does not offer consulting services. Automation is possible, and the costs depend on the required infrastructure. In countries like India, each trading strategy needs approval before automation, and only the broker can do that on behalf of the trader. The usage of stochastic and fundamental indicators in strategies is discussed, mentioning that they can be used manually or through software. The speaker also mentions the availability of tools for reading machine-readable news and economic data to create algorithms.
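For reference, the stochastic oscillator mentioned above is straightforward to quantify in code; the sketch below uses the common 14-period %K and 3-period %D, which are assumed defaults rather than settings discussed in the session.

```python
# Illustrative sketch: stochastic oscillator (%K and %D) in pandas.
# The 14-period %K and 3-period %D are common defaults, assumed here.
import pandas as pd

def stochastic(high: pd.Series, low: pd.Series, close: pd.Series,
               k_period: int = 14, d_period: int = 3) -> pd.DataFrame:
    lowest_low = low.rolling(k_period).min()
    highest_high = high.rolling(k_period).max()
    percent_k = 100 * (close - lowest_low) / (highest_high - lowest_low)
    percent_d = percent_k.rolling(d_period).mean()  # signal line: SMA of %K
    return pd.DataFrame({"%K": percent_k, "%D": percent_d})
```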

The session delves into whether people in India can engage in high-frequency trading (HFT) for non-Indian markets and whether HFT drives retail traders away from the markets. Regarding non-Indian markets, it is explained that sending money for trading margin products listed on foreign exchanges is not permitted under the LRS scheme unless one has RBI approval. However, if a global company outsources some of its trading to an Indian company, then it could be possible. Regarding the impact of HFT on retail traders, it is mentioned that the presence of HFTs adds liquidity to the market and tightens spreads, which benefits retail traders. However, illegal activities like front running should not be permitted, irrespective of the domain.

The speaker emphasizes that high-frequency trading (HFT) does not harm individual retail traders, as they typically use web-based browsers that inherently have a built-in latency of a few hundred milliseconds. Even if HFT firms use illegal methods to gain faster access, it would not impact the retail trader but harm other HFT firms that follow the rules. The speaker emphasizes that retail traders generally benefit from the efficient market created by HFT, as it eliminates arbitrage opportunities. The speaker also addresses a question about learning algorithmic trading in English and discusses a few important components for consistently profitable trading.

The video underscores the importance of continuously evolving trading strategies in the algorithmic trading industry, as markets constantly change. While not many brokers in India support algorithmic trading, some do offer programmatic or semi-algorithmic trading options. The speaker also discusses the job market for quant analysts, highlighting that it is not exclusive to PhDs but rather depends on individuals' knowledge and problem-solving skills. The hardware and infrastructure requirements for algorithmic trading are addressed as well. For low-frequency trading, a decent laptop or cloud computing options provided by companies like Amazon and Google are sufficient. Medium-frequency trading requires an algorithmic trading platform and a specialized server, which can cost a few thousand dollars. High-frequency trading demands a specialized server ranging from $10,000 to $25,000.

The speaker explains the approvals required before going live, which depend on the exchange and location. They clarify that the EPAT program covers a comprehensive range of topics and focuses on practical learning, although it does not guarantee profitable strategies. The different types of algorithms used in automated trading are discussed, including low, medium, and high-frequency algorithms. High-frequency algorithms are utilized for arbitrage, market making, and directional strategies that require faster computing. Low and medium-frequency algorithms can automate various strategies, including fundamental investing. Popular strategies like momentum, statistical arbitrage, and option-based strategies are also mentioned, with algorithms providing benefits such as scalability, emotional control, and better analysis of big data.
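As a concrete illustration of the statistical-arbitrage idea named above, the sketch below generates entries from a rolling z-score of a simple price spread; the lookback, entry threshold, and unit hedge ratio are assumptions (a fuller version would estimate the hedge ratio by regression and add exit and risk rules).

```python
# Illustrative sketch: a pairs / statistical-arbitrage entry signal from the
# rolling z-score of a price spread. Lookback, threshold, and the 1:1 hedge
# ratio are assumptions for demonstration.
import pandas as pd

def pairs_signal(price_a: pd.Series, price_b: pd.Series,
                 lookback: int = 60, entry_z: float = 2.0) -> pd.Series:
    spread = price_a - price_b                      # simplification: unit hedge ratio
    z = (spread - spread.rolling(lookback).mean()) / spread.rolling(lookback).std()
    signal = pd.Series(0, index=spread.index)
    signal[z > entry_z] = -1                        # spread rich: short A, long B
    signal[z < -entry_z] = 1                        # spread cheap: long A, short B
    return signal
```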

For retail traders interested in algorithmic trading but lacking programming experience, the speaker suggests starting with learning basic statistics and trading strategies. They provide resources for self-paced learning. Nitesh Khandelwal emphasizes the idea of creating one's own trading strategy rather than relying on pre-existing ones. They also touch upon the role of algo trading in the cryptocurrency market, stating that while some participants use automation tools for trading cryptocurrencies, algo trading is not the sole reason behind the cryptocurrency boom. The potential impact of artificial intelligence and machine learning on algo trading is mentioned, with the speaker highlighting that it will empower individual and retail traders alongside big institutions due to the affordability of computing power required for training algorithms.

The speaker further discusses the expected increase in retail participation in algorithmic trading due to the changes and automation happening in the financial sector. They address questions from the audience about resources for balance sheet data, transitioning from a non-finance firm to an algorithmic trader, and the ideal numbers for CAGR (Compound Annual Growth Rate) and winning ratio in algorithmic trading. The speaker cautions against solely focusing on percentage returns and instead emphasizes scalability, strong infrastructure, and technology as important considerations.

The session concludes with the speaker discussing the importance of considering risk when discussing returns and the investment required to start an algo trading business, which can range from a few thousand dollars to hundreds of thousands depending on the frequency and type of infrastructure needed. The speaker mentions that automation and risk management are key factors to consider when starting an algo trading business. They also provide insights into real-time data availability in India and the approval process for trading strategies, emphasizing that exchanges prioritize risk management over the specifics of the strategy. Finally, the speaker acknowledges the scarcity of good websites for back-testing and writing lefty (leveraged and intraday) strategies in Indian markets.
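The point about never quoting returns without risk can be made concrete with a small metrics helper; the 252-day trading year and zero risk-free rate below are illustrative assumptions.

```python
# Illustrative sketch: headline performance metrics (CAGR, win ratio, Sharpe).
# A 252-day trading year and a zero risk-free rate are assumed for simplicity.
import numpy as np
import pandas as pd

def performance_summary(daily_returns: pd.Series, periods_per_year: int = 252) -> dict:
    equity = (1 + daily_returns).cumprod()
    years = len(daily_returns) / periods_per_year
    cagr = equity.iloc[-1] ** (1 / years) - 1
    win_ratio = (daily_returns > 0).mean()
    sharpe = np.sqrt(periods_per_year) * daily_returns.mean() / daily_returns.std()
    return {"CAGR": cagr, "win_ratio": win_ratio, "sharpe": sharpe}
```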

In the last segment, the speaker discusses the development of tools for different markets at Horn Insights, aiming to provide better exposure and benefits to participants and users. They address a question about the salary range for quants in India, noting that it depends on factors such as experience and background. The speaker emphasizes that colocation is not manipulation and compares it to paying for air travel to reach a destination faster compared to traveling by train. They also mention that most technical indicator-based strategies can be developed using Python and highlight that while advanced programs in the algorithmic trading domain are not widely available, lifelong guidance is provided through the EPAT program.

In the final moments of the video, the speaker encourages individuals to pursue algorithmic trading and mentions that the market has evolved significantly over the years, becoming more accessible to retail traders. They invite viewers to explore the resources available at QuantInsti and Horn Insights to further their knowledge and understanding of algorithmic trading.

  • 00:00:00 Nitesh Khandelwal, co-founder of algo trading firm iRage, welcomes the audience to an "ask me anything" session on algorithmic trading. Khandelwal has experience in consulting big institutions to set up their own algo trading desks and will be sharing his expertise on the topic. The session will cover popular questions on topics such as platforms and brokers, trading strategies, market data, job opportunities, setting up an algo trading desk, regulations and the business environment, the future of algo trading, and learning and education opportunities. The session aims to strike a balance between pre-prepared questions and live questions, and will also offer individual follow-up sessions for questions that cannot be answered during the session.

  • 00:05:00 The presenter explains different trading strategies such as low frequency, medium frequency, and high frequency trading, and how these strategies are defined based on the latency of the trading infrastructure and order processing time. The presenter emphasizes that the latency of the trading strategy is more important than the number of trades executed per second. The section then covers where to get market data and economic data, naming data vendors such as Yahoo Finance, Google Finance, Quandl, Alpha Vantage, and FXCM. The presenter notes that these vendors offer either downloadable data or data that can be used on their platform.

  • 00:10:00 The speaker discusses the sources of data that can be used for algorithmic trading. Data can be obtained through manual downloads, API fetching, or paid vendors such as Quandl, Global Data Feed, Trading Economics, Thomson Reuters, and Active Financial. The question of whether HFT or algo traders generally beat manual day traders depends on the type of day traders being analyzed. If traders are taking advantage of arbitrage opportunities or market inefficiencies, machines may be faster than manual traders. However, if traders are analyzing data and executing manual orders after thorough research, machines are not necessarily more efficient. The idea that an excessively algo-traded market is counterproductive is unfounded, as automation does not always require high-frequency trading.

  • 00:15:00 The speaker explains the concept of using algos in trading, which involves trading with more efficiency and can be automated and quantified using mathematical formulas. However, finding inefficiencies in the market can be challenging, and competition in high-frequency trading and technology infrastructure is becoming more expensive. Technical indicators and patterns can be quantified and automated, but algorithms become much more complex when subjectivity is involved, such as in Elliott Wave analysis. The speaker also addresses a question on how to handle multiple strategies in a single brokerage account via an API.

  • 00:20:00 The speaker discusses the prerequisites needed for algorithmic trading, which typically involve three major pillars: statistics and econometrics, financial computing, and quant trading. Those starting from zero can learn about these pillars through various resources, such as those available freely on the QuantInsti website. For traders already familiar with trading strategies and looking to automate, they can begin by using a broker API and eventually build their own platform. In terms of data providers for tick data, most vendors provide snapshot data instead, although higher-end vendors can provide true tick data for a higher cost. Finally, for traders who are already successful with their current trading strategies, there may not be a need to learn algo trading unless they want to keep upgrading and experimenting.

  • 00:25:00 The speaker discusses the benefits of automating trading strategies such as controlling emotions and having scalability and bandwidth to work on strategies while the machines handle execution. The speaker advises that having a programming background is essential to have success in algorithmic trading and mentions that most firms across the globe use Python. However, the speaker states that HFT is not suitable for retail traders, and some strategies may require a reasonable amount of capital before seeing success. Nonetheless, even with basic knowledge of Python, one can get started in algorithmic trading.

  • 00:30:00 The speaker discusses the skills required to become an algorithmic trader, including knowledge of statistics, econometrics, and trading strategies. The speaker also explains the various career opportunities in algorithmic trading, ranging from back-office roles to front-office trading roles. For those seeking career opportunities with 10 to 20 years of domain expertise but no trading experience, the speaker shares a blog about a 40-year-old alumnus of QuantInsti who was able to transition into algorithmic trading successfully. Additionally, QuantInsti has a dedicated career cell that can help individuals acquire the necessary skills and connect with the right people to advance in their careers.

  • 00:35:00 The speaker talks about algorithmic trading languages and the importance of programming in research and analysis. He explains that high-frequency trading firms prefer to use C++ for its lower latency, but for backtesting and strategy evaluation, R and Python are more popular. In response to a user's question about improving hit ratio and back-to-back losses, he suggests optimizing parameters in backtesting and using in-sample and out-of-sample trading to check for drawdown. When discussing market saturation, he states that HFT ratio is an indicator of competition and that plain vanilla arbitrage strategies may not be successful in markets with high HFT ratios.

  • 00:40:00 The speaker discusses different algorithmic trading strategies, emphasizing the need for a strong technology infrastructure for plain vanilla arbitrage and market making strategies. The speaker also answers various audience questions, including the meaning of total bid quantity, the impact of HFTs on traditional traders in India, and the time horizon used to crunch data for algo trading, which he explains depends on the trading frequency. Furthermore, the speaker encourages individuals with a software and data science background to venture into algo trading, stating that their background already provides a strong foundation and that picking up on the financial market side should be relatively easier.

  • 00:45:00 Nitesh answers several questions related to the possibility of setting up a trading platform with their company, legal approval for automation, costs, and Indian market regulations. They provide guidance and lifelong support to their participants and alumni but do not offer consulting services. Automation is possible, and the costs depend on the infrastructure required. In countries like India, each trading strategy needs approval before automation, and only the broker can do that on behalf of the trader. Stochastic indicators can be used in any strategy, and fundamental indicators can be fed both manually or through software. There are tools to make it easier to read machine-readable news and economic data for creating algorithms.

  • 00:50:00 It is discussed whether people in India can do high-frequency trading (HFT) for non-Indian markets and also whether HFT drives retail traders away from the markets. For non-Indian markets, it is mentioned that under the LRS scheme, it is not permitted to send money for trading margin products listed on foreign exchanges unless one has RBI approval. However, if a global company outsources some of its trading to an Indian company, then it could be possible. On the question of whether HFT drives retail traders out of the market, it is mentioned that the presence of HFTs adds liquidity to the market and tightens spreads, which benefits retail traders. However, illegal activities like front running should not be permitted, irrespective of the domain.

  • 00:55:00 The speaker discusses how high-frequency trading (HFT) does not harm the retail traders at the individual level because they use web-based browsers that have a built-in latency of a few hundred milliseconds. Even if HFT firms use illegal methods to get faster access, it would not impact the retail trader but harm other HFT firms who follow the rules. The speaker points out that retail traders typically benefit from the efficient market that HFT creates as it eliminates the arbitrage opportunities. The speaker also addresses a question about learning algorithmic trading in English and talks about a few important components for consistently profitable trading.

  • 01:00:00 The video discusses the importance of continuously evolving one's trading strategy in the algorithmic trading industry as markets constantly change. While not many brokers in India support algorithmic trading, some do offer programmatic or semi-algorithmic trading options. The job market for quant analysts is not exclusive to PhDs, but rather relies on knowing one's stuff and having problem-solving skills. The video also covers hardware requirements necessary for algorithmic trading, which depend on the type of trading being done, but generally, a decent laptop or desktop is sufficient.

  • 01:05:00 The speaker discusses the hardware and infrastructure requirements for algorithmic trading. For low-frequency trading, a decent laptop or cloud computing options provided by companies like Amazon and Google will suffice. For medium-frequency trading, an algorithmic trading platform is required, and a specialized server would cost a few thousand dollars, with high-frequency trading requiring a specialized server costing $10,000 to $25,000. The speaker also explains the approvals required before going live, which depend on the exchange and location. Finally, the speaker clarifies that the EPAT program covers a comprehensive range of topics and is focused on practical learning but does not guarantee profitable strategies.

  • 01:10:00 The speaker discusses the different types of algorithms that can be used for automated trading, including low, medium, and high frequency algorithms. High frequency algorithms are used for arbitrage, market making, and directional strategies that require faster computing. On the other hand, low and medium frequency algorithms can automate different strategies, including fundamental investing. The speaker also mentions popular strategies like momentum, statistical arbitrage, and option-based strategies, and highlights that using algorithms can benefit trading by providing more scale and emotional control, and by allowing for better analysis of big data. For retail traders who are interested in algorithmic trading but do not have programming experience, the speaker suggests starting with learning basic stats and trading strategies, and provides resources for self-paced learning.

  • 01:15:00 Nitesh Khandelwal discusses the idea of using standard trading strategies and emphasizes the importance of creating one's own strategy rather than relying on pre-existing strategies. He also talks about the role of algo trading in the cryptocurrency market, stating that while there are some participants using automation tools to trade cryptocurrencies, algo trading is not the reason behind the cryptocurrency boom. Khandelwal also touches on the potential impact of artificial intelligence and machine learning on algo trading, stating that it will give individual and retail traders more power in addition to big institutions due to the affordability of computing power required for training algorithms.

  • 01:20:00 The speaker discusses the expected increase in retail participation in algorithmic trading due to the changes and automation happening in the financial sector. The speaker also addresses questions from the audience about resources for balance sheet data, transitioning from a non-finance firm to an algorithmic trader, and the best numbers for CAGR and winning ratio in algorithmic trading. The speaker cautions against solely focusing on percentage returns and instead encourages scalability and strong infrastructure and technology.

  • 01:25:00 The speaker discusses low and medium frequency trading strategies and the Sharpe ratio, stating that returns cannot be discussed without considering risk. He also mentions the investment required for starting an algo trading business, which can range from a few thousand dollars to hundreds of thousands depending on the frequency and type of infrastructure required. Additionally, the speaker mentions that automation and risk management are key considerations when starting an algo trading business. Regarding data, real-time data is possible without colocation in India, but there may be a delay of a few milliseconds. The speaker also discusses the approval process for strategies and assures listeners that exchanges generally focus more on risk management than on the specifics of the strategy. Finally, the speaker mentions that there are not many good websites for back-testing and writing lefty strategies in Indian markets.

  • 01:30:00 The speaker discusses the development of tools for different markets at Horn Insights to provide better exposure and benefits for participants and users. They also answer a question regarding the salary range for quants in India, which depends on factors such as experience and background. The speaker emphasizes that colocation is not manipulation and compares it to paying for air travel to reach a destination faster than traveling by train. Additionally, they suggest that most technical indicator-based strategies can be developed using Python and note that while there are not many advanced programs offered in the algorithmic trading domain, lifelong guidance is available through the EPAT program.

  • 01:35:00 The speaker addresses any remaining questions or concerns that viewers may have. They reassure viewers that if they have any other doubts, they should feel free to reach out for help and that they are happy to give responses to all queries. The speaker concludes by thanking the audience for attending the session and making an effort to address as many queries as possible.
AMA on Algorithmic Trading | By Nitesh Khandelwal
  • 2017.12.06
  • www.youtube.com
In this AMA session on Algorithmic Trading, get answers to all your questions, from "What is Algorithmic Trading?" to "How can you pursue it?"...