
Algorithmic Trading in Commodity Markets
Sunil Lani, Assistant Vice President at NCDEX (National Commodity and Derivatives Exchange), delves into algorithmic trading in commodity markets, with a particular focus on agricultural commodities. NCDEX, the largest agricultural exchange in India, offers roughly 20 commodities for trading.
Lani begins by introducing the three popular trading styles commonly employed in commodity markets: hedging, arbitrage, and directional trading. He highlights hedging as an investment strategy used to mitigate risk associated with a primary investment. In the context of NCDEX, farmers often hedge their underlying agricultural assets to minimize risk exposure.
Moving on, the speaker shifts the discussion towards two types of trading strategies prevalent in commodity markets: hedging and arbitrage. Lani emphasizes the significance of highly correlated underlying assets in hedging strategies. For arbitrage trading, he delves into two specific approaches: calendar spreads and pair trading, noting that the latter shares similarities with hedging strategies. Lani stresses the importance of selecting highly correlated and cointegrated commodities for pair trading, suggesting the Augmented Dickey-Fuller test to validate the cointegration relationship.
In addition, Lani provides an overview of the various stages involved in algorithmic trading. He explains that the process begins with identifying and filtering appropriate scrips or instruments to which the trading concept will be applied. Subsequently, the model is visualized, followed by rigorous backtesting and optimization of parameters or the model itself. The next steps involve paper trading and eventually transitioning to live trading, where real money is at stake.
Continuing his discussion, Lani focuses on the initial steps of algorithmic trading. He emphasizes the importance of brainstorming trading ideas and finalizing a trading logic that aligns with the trader's objectives. Key considerations include determining the frequency of trades, selecting the appropriate segment for trading, and establishing the backtesting periods. To illustrate the challenges of understanding data for trading strategies, the speaker presents data on India's Gross Domestic Product (GDP) across various sectors. He converts the data into graphical representations, facilitating a better understanding, and suggests examining correlations with price movements. Furthermore, Lani showcases visual representations of historical agricultural data, emphasizing the importance of analyzing data from multiple perspectives.
The speaker proceeds to discuss the resources required for algorithmic trading in commodity markets. He categorizes trading strategies into two main areas: arbitrage and momentum. Techniques such as pair trading, correlation analysis, moving averages, and probability distribution are commonly employed. Infrastructure is a crucial aspect of algorithmic trading, including connectivity to a broker through an API and hosting the algorithm either in the cloud or on-premise. Lani also highlights the significance of data visualization and technical indicators, which can be accomplished using tools like Excel, Tableau, Power BI, and TradingView.
Lani further explores various tools and platforms suitable for algorithmic trading in commodity markets. He mentions that non-programmers or semi-programmers often opt for platforms like MetaTrader and Interactive Brokers. For pure programming purposes, Python emerges as the leading language, with Python-based algorithmic trading platforms such as Quantopian, Blueshift, QuanTX, and Zerodha gaining popularity. Additionally, the speaker highlights essential libraries for data processing and backtesting, including pandas, NumPy, BeautifulSoup, and Backtrader, as well as sentiment-analysis tools such as Tweepy, Feedparser, and NLTK.
In the subsequent segment, Lani explains the process of generating a trading idea and designing a model using agricultural commodities as an example. Given that agricultural commodities tend to be less volatile than equities or forex, he proposes a mean-reversion strategy using Bollinger Bands as the indicator, set at two standard deviations around the mean price. The filtering criterion for selecting a liquid commodity is a volume of at least 1080, and Lani recommends trading chana on NCDEX. To visualize the model, Lani suggests using investing.com to draw the Bollinger Bands, with the band levels indicating the buy and sell points.
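A minimal sketch of this mean-reversion idea is shown below, computing Bollinger Bands at two standard deviations around a rolling mean with pandas; the 20-bar window and the column layout are illustrative assumptions rather than values from the talk:

```python
import pandas as pd

def bollinger_signals(close: pd.Series, window: int = 20, num_std: float = 2.0) -> pd.DataFrame:
    """Mean-reversion signals from Bollinger Bands at +/- num_std rolling standard deviations."""
    ma = close.rolling(window).mean()
    sd = close.rolling(window).std()
    upper, lower = ma + num_std * sd, ma - num_std * sd

    signal = pd.Series(0, index=close.index)
    signal[close < lower] = 1    # price below the lower band: expect reversion up, so buy
    signal[close > upper] = -1   # price above the upper band: expect reversion down, so sell
    return pd.DataFrame({"close": close, "ma": ma, "upper": upper,
                         "lower": lower, "signal": signal})
```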
Shifting the focus to backtesting, Lani emphasizes its importance in verifying the logic of an algorithmic trading model using historical data. This step is crucial to avoid potential losses when the model is deployed in a live environment. Lani explains the steps involved in backtesting, which include downloading data from an open portal, importing relevant libraries, writing supporting functions, generating buy and sell signals, visualizing the output, and evaluating the return generated by the strategy. He also suggests considering parameters such as returns, maximum drawdown, maximum profit, and stop-loss during the backtesting process. Lani advises using personal backtesting functions instead of relying solely on libraries obtained from platforms like Github.
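The evaluation step can be sketched in a few lines, assuming a price series and a signal series like the ones produced above; transaction charges and slippage are ignored here for brevity:

```python
import pandas as pd

def evaluate(close: pd.Series, signal: pd.Series) -> dict:
    """Apply the prior bar's signal to the next bar's return and summarize the results."""
    rets = close.pct_change().fillna(0.0)
    strat_rets = signal.shift(1).fillna(0.0) * rets        # trade on the previous bar's signal
    equity = (1.0 + strat_rets).cumprod()                  # cumulative return curve
    drawdown = equity / equity.cummax() - 1.0
    return {
        "total_return": float(equity.iloc[-1] - 1.0),
        "max_drawdown": float(drawdown.min()),
        "best_bar": float(strat_rets.max()),
        "worst_bar": float(strat_rets.min()),
    }
```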
The speaker proceeds to explain the various parameters a function takes in to generate buy and sell signals based on data frames, strategy types, entry and exit criteria, and positional feed. Traders can configure the open or closed price for their calculations, as well as set stop-loss and target percentages. Lani also discusses a statistical reporting function and another function that creates levels using standard deviation for a chosen indicator. Finally, the main function invokes these other functions to return buy and sell signals based on the chosen strategy and generate a summary.
Moving forward, Lani demonstrates how to generate backtesting reports for the positional strategy. The output includes a data frame containing all the trades, transaction charges, and slippages. The backtesting function is invoked, and the reports are generated. These reports provide statistics and graphical representations of the output, showcasing the percentage returns, transaction details, and cumulative returns over a specified time period. Lani analyzes the report and suggests setting a stop-loss at around -1.5% to avoid losses exceeding -2% or -3%. The maximum profit obtained in the backtest was 8%, indicating that the profit target can be set at around 8% or 9%.
The speaker then discusses the process of optimizing an algorithm. Lani explains that one approach to optimization involves creating another algorithm that runs the original algorithm multiple times using different sets of parameters. To illustrate this, he provides an example where the look-back period of a rolling window is optimized. By creating a list of possible values for the look-back period and utilizing a combination function, a comprehensive list of all parameter sets can be generated. Lani emphasizes the importance of optimizing algorithms to enhance their performance in commodity markets.
Continuing the discussion on optimization, Lani explains the process of using three lists of parameter values and backtesting every combination of them. The backtesting results are stored in a data frame called DF optimizer, allowing the identification of the combination that yields the maximum returns. The optimized variables are then stored in the optimized row. Lani cautions against overfitting the data during the optimization process and highlights the importance of running the same parameters on the next, out-of-sample period to confirm their robustness. Finally, the speaker downloads the report to examine the results.
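A rough sketch of this grid-search optimization is shown below, with a placeholder `backtest()` standing in for the backtesting function described earlier; the parameter names and values are illustrative:

```python
import itertools
import pandas as pd

def backtest(lookback, entry_std, stop_loss):
    """Hypothetical stand-in for the backtesting function sketched earlier."""
    # ... run the strategy with these parameters and return summary statistics ...
    return {"total_return": 0.0}

lookbacks   = [10, 20, 30]
entry_stds  = [1.5, 2.0, 2.5]
stop_losses = [-0.01, -0.015, -0.02]

rows = []
for lookback, entry_std, stop in itertools.product(lookbacks, entry_stds, stop_losses):
    stats = backtest(lookback, entry_std, stop)
    rows.append({"lookback": lookback, "entry_std": entry_std,
                 "stop_loss": stop, "return": stats["total_return"]})

df_optimizer = pd.DataFrame(rows)                          # one row per parameter combination
best = df_optimizer.loc[df_optimizer["return"].idxmax()]   # the "optimized row"
print(best)
```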
Lani proceeds to present the code utilized for optimizing trading parameters and shares the resulting statistics, including returns, mean returns, maximum drawdown, and win-loss ratio. The optimized parameters resulted in a return of 22.8%, a significant improvement compared to the 9% achieved with the previous parameter combination. Lani underscores the importance of paper trading to test algorithms without risking real money and emphasizes the need for diversification, portfolio management, and risk management when transitioning to live trading. He concludes by noting the similarities between the development process of algorithmic trading and the software product development lifecycle, emphasizing the importance of executing all stages diligently to ensure project success.
Predict Trends In Stock Markets Using AI And Python Programming
This webinar session offers a hands-on learning tutorial focused on predicting trends using AI in the stock market. Participants will actively engage in creating a classification tree model using a Jupyter Notebook. The primary objective is to develop a classification tree that can serve as a tool for establishing trading rules based on the anticipated positive or negative future returns.
Utilizing a decision tree model in trading is an essential machine learning technique that provides an immersive and interactive learning experience. During the session, attendees will have the opportunity to work directly on a Python notebook alongside an instructor.
The recorded session delves into how the decision tree model can be leveraged in trading to extract valuable trading rules. These rules serve as a foundation for making informed decisions on when to buy or sell securities.
Regarding variables, the predictor variables in this context refer to the technical indicators employed to predict market trends. On the other hand, the target variable signifies the expected trend for the following day, specifically whether it will be positive or negative.
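As a hedged illustration of this setup, the sketch below fits a small scikit-learn classification tree on a few technical-indicator-style predictors, with the sign of the next day's return as the target; the specific features and tree settings are examples, not the instructor's exact choices:

```python
import pandas as pd
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import train_test_split

def fit_trend_tree(close: pd.Series) -> DecisionTreeClassifier:
    """Fit a shallow classification tree predicting whether the next day's close is higher."""
    feats = pd.DataFrame(index=close.index)
    feats["ret_1d"] = close.pct_change()                     # predictor: 1-day return
    feats["ma_ratio"] = close / close.rolling(20).mean()     # predictor: price vs 20-day average
    feats["volatility"] = feats["ret_1d"].rolling(10).std()  # predictor: short-term volatility
    feats["target"] = (close.shift(-1) > close).astype(int)  # target: next-day up (1) / down (0)
    feats = feats.dropna()

    X, y = feats[["ret_1d", "ma_ratio", "volatility"]], feats["target"]
    X_train, X_test, y_train, y_test = train_test_split(X, y, shuffle=False, test_size=0.25)
    tree = DecisionTreeClassifier(max_depth=3, min_samples_leaf=50)
    tree.fit(X_train, y_train)
    print("out-of-sample accuracy:", round(tree.score(X_test, y_test), 3))
    return tree
```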
Quantitative Portfolio Management Strategies By Prodipta Ghosh - July 23, 2019
Prodipta Ghosh, Vice-President of Quantitative Portfolio Management, emphasizes that there is no one-size-fits-all strategy for stock trading due to the presence of uncertainties in financial markets, the dynamic nature of the market over time, and the varying goals and risk appetites of individuals. He highlights that even with a perfect vision or model of the world, it would be impossible to provide answers to traders' questions as each person operates within a unique context. Therefore, no perfect strategy exists for anyone in the world.
During his presentation, Prodipta Ghosh delves into four quantitative portfolio management strategies. These strategies include utilizing Bollinger Bands, employing a simple moving average crossover strategy, analyzing the doji candlestick pattern, and incorporating the Relative Strength Index (RSI). While a high Sharpe ratio may theoretically suggest the best strategy, past performance cannot always guarantee future results. Hence, it is crucial to construct a portfolio that encompasses diverse strategies and assets to mitigate risk and avoid significant drawdowns. Ghosh demonstrates the benefits of equally allocating capital to all four strategies, showcasing how a diversified portfolio can withstand market volatility and prevent substantial losses.
Prodipta Ghosh provides an explanation of the fundamentals of portfolio management and distinguishes it from investing in a single stock. Portfolio management entails developing an allocation across multiple strategies or assets, taking into account risks, uncertainties, the passage of time, and specific contexts. The value of a strategy is derived from the underlying returns multiplied by positions, while the portfolio value is determined by the weighted stream of underlying returns. To optimize a portfolio, a mathematical problem is solved: define a utility function U that depends on the portfolio value P, and find the weights W that maximize U. Different optimization strategies, such as mean-variance optimization, Kelly optimization, and risk-penalty optimization, can be employed depending on how U is defined and the optimization approach.
The speaker proceeds to discuss quantitative portfolio management strategies and the role of optimization problems in the process. He explores the various constraints that can be specified in an optimization problem, such as limiting the range of a portfolio, and the types of portfolios that can be constructed, including those based on alpha strategies, factor portfolios, or collections of individual stocks. The objective is to define a maximization condition that results in a portfolio with maximum value or function of portfolio value. Additionally, the speaker addresses the question of whether an equally weighted portfolio is reasonable, which depends on specific circumstances and can be viewed as an optimization problem with a penalty on the square of errors.
Prodipta Ghosh delves into the concept of risk and utility in portfolio management, highlighting the challenges in estimating expected returns and risks. He introduces modern portfolio theory and quadratic utility as approaches to maximize returns while minimizing risk. The speaker employs the example of the St. Petersburg paradox to illustrate how human decision-making may deviate from mathematical expectations.
The relationship between utility and risk is explained by Prodipta Ghosh, who emphasizes their significance in constructing a sound portfolio. He demonstrates the concept of risk premium, which quantifies the difference between the expected payout or return from a risky investment and the amount an individual is willing to accept for a certain payment. Additionally, he explains that a utility function is a mathematical representation of wealth that informs how much an extra dollar is valued, aiding in determining appropriate amounts to invest. Understanding the interplay between utility and risk enables investors to develop portfolios that strike a balance between risk and return.
The speaker discusses the notion of risk aversion in investment, which suggests that investors prefer certain payoffs over those with fluctuating returns. Risk aversion serves as a common assumption in quantitative portfolio management, with the risk premium represented by the Greek letter pi. This premium denotes the amount an investor is willing to pay to avoid a zero-mean fluctuating return. The speaker then explains the quadratic utility function and how it leads to the optimization of a portfolio's mean and variance. Building a portfolio based on Modern Portfolio Theory involves finding a balance between the mean and variance of the portfolio.
Prodipta Ghosh proceeds to explain the process of optimizing expected portfolio utility by striking a balance between the mean and variance. He utilizes Excel to simulate returns from different assets and calculates the covariance matrix, which is then utilized to determine portfolio returns, variance, and risk based on different weightings. By varying the weights and calculating the portfolio return and variance for all possible scenarios, an optimization problem can be solved. The resulting plot showcases the Sharpe ratio, which represents the ratio of return to risk, for each set of weights.
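A small NumPy translation of that Excel exercise might look like the following, sweeping random weight vectors over simulated returns and recording the Sharpe ratio; the asset count, sample size, and zero risk-free rate are assumptions for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
n_assets, n_obs = 4, 1000
returns = rng.normal(0.0005, 0.01, size=(n_obs, n_assets))   # simulated daily asset returns
mu = returns.mean(axis=0)
cov = np.cov(returns, rowvar=False)                          # covariance matrix of the assets

best_w, best_sharpe = None, -np.inf
for _ in range(10_000):
    w = rng.random(n_assets)
    w /= w.sum()                                             # long-only weights summing to 1
    port_ret = w @ mu
    port_vol = np.sqrt(w @ cov @ w)
    sharpe = port_ret / port_vol                             # risk-free rate assumed to be zero
    if sharpe > best_sharpe:
        best_w, best_sharpe = w, sharpe

print("best weights:", np.round(best_w, 3), "sharpe:", round(best_sharpe, 3))
```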
The concept of the efficient frontier in modern portfolio theory is then introduced by Prodipta Ghosh. He describes the efficient frontier as the set of portfolios that deliver the maximum return for a given risk tolerance. He further explains that adding a low-risk asset, such as a risk-free asset, adds an interesting dimension to the concept. The highest Sharpe ratio is found at the tangency portfolio, the point where a line drawn from the risk-free asset touches the efficient frontier. The line connecting the risk-free asset to the tangency portfolio is referred to as the capital market line, and it presents a choice between the market portfolio and the risk-free asset, with the investor deciding the allocation between the two.
Prodipta Ghosh delves into the Capital Asset Pricing Model (CAPM), which changes the perspective of risk in finance by measuring it as a contribution to the market portfolio rather than standalone risk. CAPM captures the required rate of return for a risky asset, calculated as the risk-free rate plus the asset's contribution to market risk multiplied by the difference between the market return and the risk-free return. This concept provides a theoretical foundation for value investing. Through valuation models such as discounted cash flow and comparables models, investors can estimate a fair price using CAPM and capitalize on a better understanding of idiosyncratic risk.
The speaker discusses various portfolio management strategies, with a specific focus on factor investing. Factor investing involves considering multiple risk factors, beyond just market risk, when constructing a portfolio. Each factor carries a premium associated with it, leading to different investing styles, including factor allocation, factor timing, or a return to value investing and stock picking. Factor investing helps explain idiosyncratic risk and provides a new interpretation of alpha and beta, where alpha and beta become the total alpha if the delta F in the equation is time-invariant and positive.
Prodipta Ghosh highlights the major differences between value investing and factor investing and considers which approach makes more sense for retail traders. He notes that value investing requires extensive research on individual companies and often entails concentration in idiosyncratic risk, which may not be suitable for small-scale retail traders. On the other hand, factor investing involves researching the market drivers of risk and systematically leveraging them to allocate investments based on expected returns. The speaker briefly touches upon the distinctions between discretionary and quantitative research, stating that quantitative management can offer more opportunities for outperformance if utilized correctly.
The speaker compares value investors and quantitative strategists, noting that while value investors have a lower probability of success, they have the potential to generate substantial returns. Quant strategists, on the other hand, have a higher probability of success but generate relatively lower yet consistent returns. The fundamental law of active management describes the information ratio as overperformance divided by the portfolio's risk, equating it to the information coefficient, or skill level, multiplied by the square root of N, where N represents the number of independent bets that can be made. Quantitative investors can make a larger number of independent bets, allowing them to optimize a factor portfolio. Ghosh also elaborates on other optimization methods, such as Kelly optimization and risk-parity optimization, the former aiming to maximize terminal wealth by compounding over multiple periods.
Prodipta Ghosh moves on to discuss the Kelly portfolio strategy, emphasizing its dominance in the long run due to its focus on maximizing final wealth. However, he cautions that the Kelly strategy is also the most aggressive in terms of risk and may not be suitable for retirees or individuals who cannot afford short-term risks. He further explains the risk parity strategy, which aims to equalize individual risk contributions and ensures that the sum of all assets' risks remains balanced. While there is no theoretical justification for this approach, it is considered a sensible allocation of risk. When deciding between the Kelly strategy, risk parity, and mean-variance optimization, one must consider their risk appetite and the accuracy of their modeling, which can be enhanced through factor modeling. Ultimately, these strategies revolve around balancing risk and return, with a strong emphasis on measuring and managing risk effectively.
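For a single asset following geometric Brownian motion, the growth-optimal (Kelly) fraction reduces to the expected excess return divided by the return variance; the numbers below are purely illustrative assumptions:

```python
# Assumed inputs: annualized excess return and volatility of a single strategy.
mu = 0.08                       # expected annual excess return
sigma = 0.20                    # annual volatility
kelly_fraction = mu / sigma**2  # growth-optimal fraction of capital under GBM
print(f"Kelly fraction: {kelly_fraction:.2f}x capital (often scaled down, e.g. half-Kelly)")
```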
Prodipta Ghosh proceeds to discuss the topic of alpha strategies and how to combine them to create a well-rounded portfolio. While mean-variance optimization can be employed for alpha strategies, it runs into an issue where all the allocation in the portfolio goes to the single best strategy based solely on historical, in-sample data. To address this concern, Ghosh introduces an equal-vote approach, where all strategies are given an equal vote rather than being weighted purely by in-sample performance. Another approach is the regret-switching portfolio, which employs change-analysis techniques like hidden Markov models or change-point analysis to allocate capital among different alpha strategies. One notable technique is the no-regret approach, which addresses the exploration-versus-exploitation problem by systematically exploring each alpha strategy to identify the one with the most potential before investing heavily in it.
Prodipta Ghosh highlights that there are numerous resources available for further exploration of portfolio optimization, from Wikipedia to Quantra's recently launched course on quantitative portfolio management. He mentions several opportunities for learning and growth in the industry through QuantInsti's programs, such as the interactive self-paced learning portal Quantra and Blueshift, which offers free backtesting. Ghosh expresses his gratitude to the audience for their participation and encourages them to visit the website for additional information and resources.
Algorithmic Trading | Is It Right for You & How To Get Started
Ladies and gentlemen, I would like to introduce Nathan, the co-founder of Elle Foam Advisory, who will be sharing valuable insights on the fascinating world of algorithmic trading. Nathan begins his presentation by defining algorithmic trading and highlighting its significance in the financial industry. He explains that algorithmic trading involves the use of computer algorithms to execute trades automatically, and it plays a crucial role in modern-day markets.
Nathan goes on to discuss the evolving nature of algorithmic trading and how its definition can vary based on geographical location and regulatory frameworks. In the United States, any form of systematic trading falls under the umbrella of algorithmic trading. However, in other regions, it is specifically considered algorithmic trading when computer algorithms autonomously determine order parameters. This distinction emphasizes the diverse approaches and perspectives within the field.
The speaker then proceeds to shed light on the current industry trends in algorithmic trading. He highlights the increasing prevalence of DIY (Do-It-Yourself) traders who utilize algorithmic strategies. Furthermore, Nathan presents data that demonstrates the significant market share growth of algorithmic trading in Asia, the United States, and India. Despite this growth, he acknowledges that retail participation in algorithmic trading remains relatively low and promises to explain this phenomenon in upcoming slides.
Moving forward, Nathan explores the impact of algorithmic trading on the job market. He explains how automation is replacing human traders, and firms are now seeking coders to develop sophisticated trading strategies and harness the power of machines. The speaker emphasizes four key advantages of machine trading over human trading: uptime, reaction time, scalability, and the ability to learn and improve. Machines can continuously monitor risks, execute trades promptly, adapt to market changes efficiently, and learn from their experiences more effectively than human traders.
Addressing the low retail participation in algorithmic trading, Nathan outlines several reasons for this discrepancy. Firstly, algorithmic trading requires a combination of technical knowledge, including coding and statistics, with a solid understanding of finance and market dynamics. Secondly, access to relevant market data is crucial for backtesting and developing robust strategies. Lastly, transitioning from manual trading to algorithmic trading can be challenging without guidance from experienced market practitioners who possess practical expertise in the field. Despite these obstacles, Nathan highlights the undeniable benefits of algorithmic trading, such as scalability, effective risk management, and the elimination of human error, making it an attractive option for traders.
Nathan then introduces the audience to the EPAT course offered by QuantInsti. He discusses the difficulty of finding a platform that provides comprehensive support for algorithmic trading, encompassing guidance from market practitioners, technical knowledge, and up-to-date content. The EPAT course aims to bridge this gap by offering rich content created by industry professionals that is continuously updated to reflect the latest trends. The course also provides dedicated support from the faculty and adopts a market-oriented approach, making it an ideal resource for both beginners venturing into algorithmic trading and those looking to advance their careers in this field.
Further elaborating on the course content, Nathan outlines the modules covered in the algorithmic trading program. The course begins with a primer module that establishes a foundation with basic statistics, probability theory, and the application of financial models. It then progresses to cover Python basics and advanced statistics, including Gaussian models used in understanding complex strategies. The course also includes sessions on resume building, setting up a personal trading desk, and conducting mock interviews for placements with over 100 partnered companies. Throughout the course, the instructor provides personal assistance to students, ensuring that any questions or difficulties are promptly addressed. Additionally, joining the EPAT course grants exclusive benefits, including access to community events and features, which are discussed in upcoming sections.
Continuing his presentation, Nathan dives into the details of each module within the algorithmic trading course. The course commences with the building blocks module, setting the foundation for understanding equity and futures strategies. Students engage in hands-on exercises to create various trading strategies. The program then delves into market microstructure and implementation, exploring the intricacies of backtesting ideas on historical data using different APIs and brokers. Machine learning is also introduced as an emerging field within algorithmic trading. The importance of trading and front-office operations is emphasized, with a dedicated module focusing on setting up algorithmic trading infrastructure. The course also covers options trading, portfolio optimization, and risk management. Finally, students undertake a project and, upon successfully passing the exam, receive a verified certificate, validating their expertise in algorithmic trading.
Nathan then shifts the audience's attention to the Algorithmic Trading program offered by QuantInsti. He highlights that upon completion of the comprehensive 300+ hour course, participants receive a verified EPAT certificate. The faculty includes renowned professionals in the industry who are approachable and provide hands-on experience across different asset classes and roles. The course covers various aspects ranging from CV preparation to providing access to APIs and broker networks for seamless implementation. Furthermore, the QuantInsti team assists participants with fundraising opportunities, making it an ideal choice for those seeking a comprehensive education in algorithmic trading.
Following Nathan's discussion, Nadine takes the stage to enlighten the audience about the benefits of being part of the EPAT community. She emphasizes the lifelong guidance available to community members, as well as the opportunity to connect with fellow students from over 165 countries. Exclusive events and sessions, free and subsidized access to brokers, and access to backtesting tools like BlueShift are among the privileges of the community. Furthermore, EPAT adds a fundamental quantitative dimension to an individual's existing skill set, enhancing their professional profile. Notably, the EPAT program is recognized under the financial training scheme, and working professionals in Singapore can benefit from a reimbursement of 2,000 Singaporean dollars.
Concluding the presentation, Ben Magnano shares his personal journey in algorithmic trading. He recounts his early struggles with day trading in 2005 until he found QuantInsti, where he received rigorous training in quantitative and algorithmic trading fundamentals. Ben highlights the importance of learning Python and being able to write his own programs, eventually earning his certificate as a quantitative trader. This achievement opened doors for him, leading to an opportunity as a research consultant at WorldQuant, where he continues to refine his coding skills and stay updated with the latest industry trends, such as artificial intelligence.
In the final moments of the video, the speaker acknowledges the tremendous growth in algorithmic trading and how it is increasingly preferred by traders who seek to minimize the need for constant monitoring. The speaker expresses gratitude for the exceptional analysis provided by the presenters, recognizing the valuable insights shared throughout the presentation. As the video concludes, the speaker summarizes the EPAT program, designed to equip participants with industry-ready skills in the quantitative and FinTech domain, ensuring they are well-prepared to thrive in the field of algorithmic trading.
Risk Models For Quant Trading By Zura Kakushadze - May 16, 2019
Zura Kakushadze, in his discussion, focuses on the challenges associated with calculating the inverse of the covariance matrix for optimizing portfolios of 2,000 US stocks. He highlights that when the number of observations in the time series of returns is smaller than the number of stocks in the portfolio, the sample covariance matrix becomes singular and cannot be inverted. Even if it were non-singular, the off-diagonal elements representing correlations would be highly unstable out-of-sample unless there is a significantly greater number of observations compared to the stocks, which is typically not the case in real-life applications.
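This point is easy to verify numerically: with fewer return observations than stocks, the sample covariance matrix has rank at most the number of observations and therefore cannot be inverted; the sizes below are illustrative:

```python
import numpy as np

n_stocks, n_obs = 2000, 250                      # e.g. roughly one year of daily returns
returns = np.random.default_rng(1).normal(size=(n_obs, n_stocks))
cov = np.cov(returns, rowvar=False)              # 2000 x 2000 sample covariance matrix
print("matrix size:", cov.shape)
print("rank:", np.linalg.matrix_rank(cov))       # about 249, far below 2000, so not invertible
```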
Kakushadze explains that risk models for quantitative trading strategies differ from traditional risk models due to shorter holding periods and ephemeral alphas. Long look-back periods are not desirable for these strategies, and alternative methods for calculating the covariance matrix are required. One common approach is to use a factor model that decomposes risk into factor risk and specific risk. The advantage of the factor model is that it represents the large covariance matrix by a much smaller factor covariance matrix, making it computationally efficient. However, Kakushadze points out that there are still intricate details that need to be addressed in the factor model.
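The decomposition can be sketched as follows, building a large covariance matrix from a tall loadings matrix, a small factor covariance matrix, and a diagonal specific-risk matrix; all inputs here are random placeholders rather than real data:

```python
import numpy as np

N, K = 2000, 20                                   # stocks and risk factors
rng = np.random.default_rng(2)
loadings = rng.normal(size=(N, K))                # factor loadings matrix (N x K)
factor_cov = np.cov(rng.normal(size=(500, K)), rowvar=False)   # small K x K factor covariance
specific_var = rng.uniform(0.01, 0.04, size=N)    # diagonal idiosyncratic (specific) variances

model_cov = loadings @ factor_cov @ loadings.T + np.diag(specific_var)
print(model_cov.shape)                            # (2000, 2000), built from much smaller pieces
```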
The speaker further discusses the challenges associated with calculating volatility for each stock and suggests focusing on the sample correlation matrix rather than the sample covariance matrix. The sample correlation matrix is preferred due to issues such as singularity, instability, and other concerns associated with the covariance matrix. Kakushadze proposes factoring out the variances and building a factor model for the correlation matrix instead of the covariance matrix. The question of determining the risk factors arises, and two possibilities are suggested: using principal components of the sample correlation matrix or employing style factors such as size, momentum, and volatility.
Different types of risk factors suitable for quantitative trading are explored, including style factors and industry classifications. The speaker highlights the importance of using short horizon factors that are relevant for trading and excluding longer horizon factors. The risk of inadvertently neutralizing desirable alpha factors in the risk model is also discussed, emphasizing the need for careful selection and weighting of risk factors.
Kakushadze explains that standardized risk models purchased from vendors are incapable of removing undesirable risk factors or covering all the relevant directions of a trader's risk space. Therefore, the speaker suggests building a custom risk model from scratch. One approach is to use statistical risk models, which involve taking a time series of returns with a limited lookback period and creating factor loadings based on the principal components of the sample correlation matrix.
The concept of effective rank is introduced as a way to determine the number of principal components to use as risk factors. Effective rank measures the effective dimensionality of a matrix and can be calculated using spectral entropy. However, statistical risk models have limitations in terms of the number of risk factors, as it is constrained by the number of observations, resulting in limited coverage of the risk space. The instability of higher principal components out-of-sample is also a concern.
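A sketch of the effective-rank calculation, assuming it is defined as the exponential of the spectral entropy of the correlation matrix's normalized eigenvalue spectrum:

```python
import numpy as np

def effective_rank(corr: np.ndarray) -> float:
    """Effective dimensionality of a correlation matrix via exp(spectral entropy)."""
    eigvals = np.clip(np.linalg.eigvalsh(corr), 1e-12, None)
    p = eigvals / eigvals.sum()           # normalized eigenvalue spectrum
    entropy = -np.sum(p * np.log(p))      # spectral (Shannon) entropy
    return float(np.exp(entropy))
```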
The instability of out-of-sample pairwise correlations and off-diagonal elements in the correlation matrix is discussed. Kakushadze explains that higher principal components calculated from an unstable correlation matrix are frequently updated and unstable, while the first principal component tends to be relatively stable. The speaker also delves into defining style factors suitable for shorter holding strategies and suggests dropping statistically insignificant correlations, such as shares outstanding, from intraday trading strategies.
Four common factors used in short horizon quant trading models are discussed: direction (momentum), volatility, liquidity, and price. Kakushadze explains how each factor is defined and how factor returns can be calculated using cross-sectional regression. The calculation of the annualized Sharpe ratio for each factor return is emphasized in determining their statistical relevance and suitability for trading strategies.
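For a single date, the cross-sectional regression can be sketched as an ordinary least-squares fit of the cross-section of stock returns on the style loadings; repeating this over dates yields the factor-return time series whose Sharpe ratios are then examined. The data below are random placeholders:

```python
import numpy as np

n_stocks, n_factors = 2000, 4                          # direction, volatility, liquidity, price
rng = np.random.default_rng(3)
loadings = rng.normal(size=(n_stocks, n_factors))      # today's style-factor loadings
stock_returns = rng.normal(scale=0.02, size=n_stocks)  # today's cross-section of stock returns

# One cross-sectional OLS regression gives one observation of the factor returns.
factor_returns, *_ = np.linalg.lstsq(loadings, stock_returns, rcond=None)
print("factor returns for this date:", np.round(factor_returns, 5))
```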
The speaker moves on to testing and verifying factor loadings and the effectiveness of style factors in risk modeling. Backtesting on intraday trades or shorter alpha trades on residuals after factoring out historical returns using the factor loadings is suggested as one way of testing factor loadings. The value of big sectors compared to style factors is highlighted, even at the least granular level. Constructing risk models based on industries or sub-industries using fundamental industry classifications is recommended as they cover a larger portion of the risk space. The stability of the first principal component out-of-sample affects the effectiveness of these risk models.
The construction of a factor loadings matrix for a large number of sub-industries is discussed, and hierarchical industry classifications are proposed as a solution. This approach involves modeling sub-industries first and then using the next granular level of industries to model the risk factors, continuing until the problem is reduced to a smaller matrix that can be properly calculated.
The process of reducing problems step-by-step to calculate risk models for quant trading is explained. By initially calculating a factor loadings matrix of a smaller size, such as 10 by 10, to the sample covariance matrix, Kakushadze constructs a one-factor model for the remaining factor, which is the market. This reduces the problem from a large matrix to a smaller one. Including style factors in this construction is suggested, but their contribution may be limited compared to a larger number of risk factors from various industries. Style factors may not be ideal proxies for modeling correlations between stocks.
The importance of including an intercept in the normalization process of style factors is explained. The speaker clarifies that the log of the price, typically used as a style factor, is actually the log of the price divided by a normalization factor. The normalization factor is empirical and can be customized based on the trader's preference. While industry-based factors tend to be reliable proxies for modeling correlations, bilinear combinations of style factors are considered poor proxies. Therefore, traders are advised to focus on industry-based factors and customize their models according to their trading style and quantitative trading alphas.
The speaker introduces the concept of heterotic risk models, which combine powerful ideas such as factor models, industry classifications, and principal components into a construction that can be highly effective in risk modeling. Clustering techniques are also discussed as a way to construct risk factors using multi-level clustering schemes that can replace fundamental industry classifications. However, non-deterministic clustering algorithms may produce different clusterings each time they are run, introducing noise into the system. To reduce noise, a large number of clusterings can be averaged, or other techniques like dimensionality reduction or principal component analysis can be employed.
Different approaches for clustering in quant trading risk models are explored. The speaker explains that while k-means clustering may be non-deterministic, deterministic alternatives such as hierarchical clustering can be subjective and slower. The speaker suggests using risk models themselves for aggregation instead of relying solely on clustering. In the case of k-means, the non-deterministic nature arises from the initialization of cluster centers, but finding the global minimum is not always necessary. To improve upon the naive approach of using historical returns, normalizing returns against historical volatilities is proposed.
Cluster normalization and multi-level clustering are discussed for quant trading. Clustering is recommended on returns divided by variance rather than returns normalized by standard deviation, for the purpose of optimizing portfolios and improving performance. Two approaches for multi-level clustering are presented: bottom-up, where the most granular level is created first and clusters are then clustered successively, and top-down, where the least granular level is created first and tickers are then clustered successively. Hierarchical algorithms offer no particular performance advantage over non-deterministic ones, and the speaker suggests combining clustering with aggregation techniques.
The speaker addresses the issue of determining the number of clusters in clustering-based risk models. Traditional methods such as the elbow method or silhouette analysis are mentioned, but they may not always provide reliable results. Instead, the speaker suggests using stability analysis, which involves creating multiple clustering solutions and measuring the stability of the resulting clusters. The stability can be assessed using techniques such as cluster-pair stability or bootstrap stability.
Kakushadze emphasizes the importance of stability in clustering-based risk models, as unstable clusters can lead to unreliable risk estimates. He suggests that stable clusters should be used for risk modeling, while unstable clusters should be discarded or combined with other clusters to improve stability. The speaker also mentions the use of machine learning techniques, such as hierarchical clustering using machine learning algorithms, as an alternative to traditional clustering methods.
The discussion then moves on to the construction of risk models based on the selected clusters. The speaker proposes using the sample correlation matrix within each cluster to estimate the factor loadings. By decomposing the sample correlation matrix of each cluster into its eigenvalues and eigenvectors, the factor loadings can be obtained. The factor loadings matrix for the entire portfolio can then be constructed by combining the factor loadings from each cluster.
The speaker highlights the importance of properly normalizing the factor loadings to ensure that they represent risk contributions. He suggests using the inverse of the eigenvalues as weights for the factor loadings to achieve risk parity. This ensures that each stock contributes equally to the overall portfolio risk. The risk model can be further enhanced by including additional factors such as style factors or industry-based factors.
Zura Kakushadze discusses the challenges and approaches in constructing risk models for quantitative trading strategies. He emphasizes the importance of addressing issues such as singularity and instability in the covariance matrix, as well as selecting appropriate risk factors and clustering techniques. By combining factor models, industry classifications, and clustering, traders can build custom risk models that effectively capture the risk characteristics of their portfolios.
Forex Trading for Beginners | Algorithmic Trading In FX Markets By Dr. Alexis Stenfors
Dr. Alexis Stenfors delves into a comprehensive analysis of the foreign exchange (FX) market, with a particular focus on liquidity and its significance. He begins by emphasizing the immense size of the FX market and its comparative scale in relation to the global stock market. Despite potential crises or natural disasters, liquidity in the FX market tends to remain robust.
Dr. Stenfors sheds light on the competitive nature of the professional FX market, noting its international scope. Trading a single currency pair in this market necessarily means buying one currency against another, which distinguishes the FX market from the stock market, where buying stocks is more common and straightforward. Furthermore, central banks can intervene in the FX market by influencing the value of a currency through actions such as printing money or direct intervention, whereas such interventions are less common in the stock market. Additionally, the FX market operates largely without regulation, circuit breakers, or transparency, making it challenging to access reliable data for research purposes.
The core of liquidity in the FX market is explained by Dr. Stenfors, who highlights the importance of relationships and conventions between banks. Unlike traditional stock and equity markets, market makers in the FX market cannot quote prices or provide liquidity unless they know that another party is ready to reciprocate. In the FX swap market, competitors' bid-ask spreads tend to cluster around specific digits, and intriguingly, competitors often quote the exact same spreads rather than offering varied spreads.
Market conventions in the forex trading industry are discussed by Dr. Stenfors, focusing on price- and volume-based conventions. These conventions dictate appropriate trading behavior and facilitate strong relationships between banks and customers. Surveys indicate that only a small percentage of traders follow conventions primarily for profit-making purposes, while the majority perceive them as a means to foster relationships and maintain a positive market image. The rise of algorithmic trading has brought about changes in these conventions, with algorithmic trading accounting for over 70% of trading on platforms like EBS.
The implications of algorithmic trading for the forex market are debated by Dr. Stenfors. Proponents argue that high-frequency trading can enhance market efficiency, reduce transaction costs, and improve liquidity. However, skeptics contend that algorithms are ill-suited for adhering to conventions that were originally designed for human relationships. Traders using electronic platforms may face challenges when the market swiftly moves as they attempt to execute trades. Liquidity is now perceived as complex and difficult to ascertain. Despite differing viewpoints on algorithms, both sides agree that FX liquidity is undergoing changes that require closer examination. Dr. Stenfors presents data from a trading platform indicating an equal split between human and algorithmic trading in 2010.
Examining the volume and liquidity of the forex market, Dr. Stenfors focuses on the euro dollar currency pair as an example. He reveals that over three trading days, the total amount of limit orders for euro dollar was 1.8 trillion, with a narrow spread of only 0.08 percent. This indicates a highly liquid market with tight spreads. However, less than one percent of all limit orders actually resulted in transactions, and the median limit order lifetime was a mere 2.5 seconds. These findings suggest that while the market may appear liquid, its true liquidity might be less significant than it appears. Dr. Stenfors poses the question of whether liquidity can be swiftly accessed and conducts a test to determine if the market reacts promptly to attempted deals.
Dr. Stenfors shares his research on the impact of limit order submissions on liquidity in the FX market. Analyzing 1.4 million limit order submissions, he discovers that a new limit order immediately adds liquidity to the other side of the order book, benefiting high-frequency traders. However, liquidity disappears within 0.1 second, suggesting that algorithmic trading only contributes to short-term liquidity. Dr. Stenfors highlights a significant shift in the willingness to support liquidity in the FX market over the past decade, underscoring the importance of considering various aspects of liquidity, such as price-based liquidity, volume-based liquidity, community-based liquidity, and speed-based liquidity when analyzing the market.
The concept of different order types in forex trading and their ethical implications is explained by Dr. Stenfors. He elucidates that split orders are employed to divide large orders into smaller ones to prevent other traders from canceling their orders and to conceal information-rich orders. However, spoof orders, which create a false impression of the market state, are typically illegal in most markets. On the other hand, ping orders, aimed at extracting hidden market information, are less controversial but subject to interpretation. Dr. Stenfors also introduces his conservative definition of split orders, revealing that they accounted for 15-20% of euro dollar and dollar yen orders among the five currency pairs examined.
Dr. Stenfors delves into the use of split orders and their aggressiveness in the FX market. Contrary to popular belief, large orders often exhibit high aggression, and split orders serve not only to mask larger amounts but also to enable algorithmic traders to submit more aggressive orders. However, the market response to split orders is much more pronounced compared to typical human orders, and algorithms quickly adapt to this strategy, making split orders less effective. The discussion also touches upon spoofing and pinging, indicating that major currency pairs like euro dollar and dollar yen are highly sensitive to information, making them susceptible to spoofing, while pinging is used to extract hidden information by testing the market with orders and observing any reactions.
Dr. Stenfors presents a proxy he developed to analyze the prevalence of "pinging" in various FX markets. A ping order is canceled before any market change occurs, making it a potential indicator of pinging activity. Using a comprehensive database, Dr. Stenfors estimates that around 10% of orders in the euro dollar and dollar yen markets may be potential ping orders. However, in markets like euro Swedish and dollar ruble, this percentage increases significantly, reaching as high as 50% and 80% respectively. Notably, pinging appears to be more prominent in less traded markets on the platform. Dr. Stenfors suggests that studying liquidity requires consideration of diverse strategies and order lifetimes, as the market-making function, particularly in the FX spot market, is increasingly being carried out by algorithms.
Dr. Stenfors discusses the evolving nature of liquidity in the forex market and emphasizes the need for a broader range of metrics to assess it. He underscores the impact of order-strategy practices such as order splitting, spoofing, and pinging. While these issues have been extensively studied in equity markets, their effects on forex liquidity can be significantly different, despite the larger size of the forex market. Dr. Stenfors recommends that traders remain aware of these complexities regardless of their order submission methods and provides additional resources for those interested in further exploration.
Dr. Alexis Stenfors offers a detailed analysis of the forex market, specifically focusing on liquidity and its various dimensions. His research highlights the unique characteristics of the forex market, including its size, competitive nature, and international scope. He emphasizes the importance of market conventions, the implications of algorithmic trading, and the impact of different order types on liquidity. Through his studies, Dr. Stenfors reveals the complexities and evolving nature of forex liquidity, underscoring the need for comprehensive assessment and understanding in this dynamic market.
Develop And Backtest Your Trading Strategies | Full Tutorial
The video begins by introducing an experienced quant who will provide guidance on developing and executing trading strategies using Blueshift, a cloud-based platform. Blueshift offers comprehensive data sets, including US and Indian equity markets, as well as detailed Forex data. The session covers systematic strategies, a primer on Python, an introduction to Blueshift, creating reusable templates for backtesting, technical indicators, constructing a simple strategy using a single indicator, and managing portfolio strategies. Importantly, the session does not offer trade recommendations or claim to provide foolproof strategies.
The speaker highlights the different approaches to trading styles, such as fundamental, technical, and quant, and how they treat trends, mean reversion, breakouts, and carry in unique ways. Designing a systematic trading strategy involves selecting securities, generating buy and sell signals, computing target portfolios, executing trades, and continuously improving the process. The speaker explains the inputs required for systematic strategies, including price data and its transformations, fundamental and non-market information, and trading rules/logic. These rules can be developed based on a trader's hypothesis or through data-driven techniques like machine learning and artificial intelligence.
The speaker emphasizes the importance of testing trading strategies through backtesting and forward testing. Backtesting helps traders verify the validity of their hypotheses, while forward testing guards against biases and pitfalls like data mining biases, survivorship biases, market impact modeling, and look-ahead biases. A flexible backtesting platform is essential for adjusting and modifying strategies, and risk management and portfolio creation are crucial as not all strategies perform well in every market. The speaker provides a brief introduction to using Python-based code in the Blueshift platform for strategy creation and testing.
The video explains the four essential functions required for backtesting trading strategies on Blueshift. These functions are "initialize," which sets up initial parameters, "before_trading_start," called before each trading session, "handle_data," executed at each new price bar arrival, and "analyze," used for strategy analysis. The speaker demonstrates the order in which these functions are called and how traders can position their code within each function. The section concludes with a basic introduction to using Python in the Blueshift platform.
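A bare-bones skeleton of these four callbacks is sketched below in the zipline-style syntax that Blueshift exposes; the import path, example symbol, and buy-and-hold logic are assumptions for illustration, not code from the video:

```python
from blueshift.api import symbol, order_target_percent

def initialize(context):
    # runs once at the start: set parameters and the trading universe
    context.asset = symbol("AAPL")
    context.invested = False

def before_trading_start(context, data):
    # runs before each trading session begins
    pass

def handle_data(context, data):
    # runs on every new price bar
    if not context.invested:
        order_target_percent(context.asset, 1.0)   # allocate 100% to the asset once
        context.invested = True

def analyze(context, performance):
    # runs after the backtest, for analysis of the results
    print(performance.tail())
```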
For viewers unfamiliar with Python, the video offers a primer on Python basics. It covers variables, strings, integers, floats, and data structures like dictionaries and lists. The creation of functions and classes in Python is also introduced. The video then delves into the Blueshift workflow, explaining the "initialize," "before_trading_start," "handle_data," and "analyze" steps. The usefulness of scheduled and ordering functions is highlighted.
The presenter discusses the three primary ordering functions in Blueshift. The first function, "order_percent_target," allows traders to take positions in underlying assets based on the target portfolio's weight. The second function, "get_open_orders," provides the number of pending orders, and the third function, "cancel_order," allows cancellation of orders. The presenter emphasizes the importance of controlling the trading environment and demonstrates functions like "set_commission," "set_slippage," and "set_account_currency." The "context" and "data" objects in Blueshift are explained, showcasing their role in capturing algorithm state and accessing data. An example illustrates accessing the portfolio and data for a simple buy-and-hold strategy using the "history" function. The concept of scheduling using the "schedule" function is introduced, allowing users to define when specific functions should be called.
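A short sketch combining scheduling, historical data access, and target-percent ordering in the same zipline-style syntax; the symbols, the filter rule, and the schedule arguments are illustrative assumptions:

```python
from blueshift.api import (symbol, order_target_percent, schedule_function,
                           date_rules, time_rules)

def initialize(context):
    context.universe = [symbol("NIFTY-I"), symbol("BANKNIFTY-I")]
    # call rebalance every day, 30 minutes after the market opens
    schedule_function(rebalance,
                      date_rules.every_day(),
                      time_rules.market_open(minutes=30))

def rebalance(context, data):
    prices = data.history(context.universe, "close", 20, "1d")   # last 20 daily closes
    weight = 1.0 / len(context.universe)                         # equal-weight target
    for asset in context.universe:
        # simple filter: hold only if the latest close is above its 20-day average
        target = weight if prices[asset].iloc[-1] > prices[asset].mean() else 0.0
        order_target_percent(asset, target)
```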
The tutorial focuses on creating a template to streamline strategy development and avoid repetitive code. Technical indicator libraries like TA-Lib and standard libraries like pandas and NumPy are imported. The universe of securities is narrowed down to major indices, and the "context" variable is initialized as a dictionary to store strategy parameters. These parameters include the indicator look-back, buy/sell thresholds, moving-average periods, RSI, Bollinger Bands, ATR, and trade frequency. This template aims to minimize boilerplate code and standardize parameters for easy modification.
The speaker introduces a variable to control trading and creates a portfolio with weights for each instrument in the universe. They set commission and slippage to zero for demonstration purposes. The "handle_data" function is defined to execute trading every 15 minutes. The "run_strategy" function becomes the main function for running the strategy. It retrieves past prices and computes weights before rebalancing using the "context.universe.prices" function. The "rebalance" function iterates through all securities in the universe and places orders to achieve target weights. An anonymous function is defined to print the context portfolio and weights, and an "advisor" class is created to compute the weight object.
The speaker explains how to define inputs for the "advisor" class, including the name and signal function, and how to pass the stock selection universe. They cover initialization and storing the advisor's performance, as well as defining the main function that calls the signal function to generate buy/sell signals. The speaker emphasizes defining the signal function based on technical indicators, often expressed as weighted functions of past prices. They recommend referring to theoretical papers from experts like Cliff Asness of AQR Capital Management.
Technical indicators and their correlation to the market are discussed based on statistical analysis using principal component analysis. Technical indicators act as filters on past prices or returns, capturing long or short-term trends by filtering high or low-frequency data. However, technical indicators can be self-fulfilling prophecies and are susceptible to certain types of trading algorithms that can lead to momentum or stop-loss hunting. It's important to have a portfolio of different indicators when developing and backtesting trading strategies.
The instructor explains importing the technical analysis library and lists available technical indicators. Using the example of Bollinger Bands, the instructor demonstrates the function "Bbands" to retrieve the last row's value. Other functions like RSI, MACD, Fibonacci support, resistance, etc., are also showcased. The instructor explains the "get_price" function and the "handle_data" function, which checks if it's time to trade for each period. The "run_strategy" function looks for suitable arguments using the "advisor_compute_signal_price" function, followed by the "rebalance" function to place orders for target percentages. Finally, the "analyze" function is used for strategy analysis.
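The indicator calls can be illustrated with TA-Lib directly, using a synthetic price series; the parameter values below are common defaults rather than the instructor's settings:

```python
import numpy as np
import talib

# synthetic close prices purely for demonstration
close = 100 + np.cumsum(np.random.default_rng(4).normal(0, 1, 200))

upper, middle, lower = talib.BBANDS(close, timeperiod=20, nbdevup=2, nbdevdn=2)
rsi = talib.RSI(close, timeperiod=14)
macd, macd_signal, macd_hist = talib.MACD(close, fastperiod=12, slowperiod=26, signalperiod=9)

print("last close:", round(close[-1], 2),
      "| lower band:", round(lower[-1], 2),
      "| RSI:", round(rsi[-1], 1))
```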
The speaker focuses on managing strategy portfolios to enhance algorithmic trading profits. Instead of relying on a single strategy, running multiple strategies simultaneously or in different periods is recommended. Four methods for managing strategy portfolios are discussed: creating a committee, using a regime-switching model, dynamic allocation, and factor-based investing. Averaging can improve signal stability. The strategy's code involves adding an agent responsible for selecting advisors and allocating capital. The agent uses a weighting function to update advisor weights, which feed into the rebalance function.
The speaker explains how to define and weight portfolios based on the number of advisors, with an equal allocation for each. They demonstrate creating separate expert advisors and an agent to allocate capital among them, as sketched below. A quick backtest shows significantly improved performance compared to the individual cases. The speaker emphasizes the importance of drawdown in a trading strategy and suggests looking at the Sortino ratio and the stability of the profit-and-loss curve. The equally weighted portfolio of advisors improves performance substantially, but there is room for further improvement.
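A sketch of the equal-weight allocation idea, with assumed advisor and function names, is shown below: each advisor receives 1/N of the capital, and the final instrument weights are the advisor signals scaled by that allocation.

```python
# Equal-weight capital allocation across advisors, then combined target weights.
def equal_weight_allocation(advisors):
    # each advisor gets the same share of capital
    return {adv.name: 1.0 / len(advisors) for adv in advisors}

def combine_weights(advisors, advisor_signals, allocation):
    # final instrument weight = sum over advisors of (allocation x signal)
    combined = {}
    for adv in advisors:
        for asset, signal in advisor_signals[adv.name].items():
            combined[asset] = combined.get(asset, 0.0) + allocation[adv.name] * signal
    return combined   # fed to the rebalance function as target weights
```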
The speaker introduces the concept of "no-regret trading," which involves determining the best-performing investment strategy in a difficult-to-predict market. Rather than relying on a single investment, the strategy involves varying the weights of each investment. The speaker recommends using the exponential gradient algorithm to determine weights, adjusting them based on the portfolio's response to market scenarios. The Kelly criterion is also suggested for capital allocation, maximizing return versus variance based on geometric Brownian motion.
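The two allocation ideas can be sketched as follows; the learning rate, the advisor returns and the drift/volatility inputs are illustrative assumptions rather than values from the webinar.

```python
# Exponential-gradient update of advisor weights, plus a Kelly-style fraction
# for a geometric-Brownian-motion asset: f* = (mu - r) / sigma^2.
import numpy as np

def exponential_gradient_update(weights, returns, eta=0.2):
    # reward advisors that did well in the last period, then re-normalise
    w = weights * np.exp(eta * returns)
    return w / w.sum()

def kelly_fraction(mu, sigma, risk_free=0.0):
    # growth-optimal fraction balancing return against variance
    return (mu - risk_free) / sigma ** 2

weights = np.array([0.25, 0.25, 0.25, 0.25])          # four advisors, equal start
last_returns = np.array([0.01, -0.005, 0.002, 0.0])   # hypothetical period returns
print(exponential_gradient_update(weights, last_returns))
print(kelly_fraction(mu=0.08, sigma=0.2))
```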
The speaker explains the output of weights and how they differ for different advisors. They test a random signal that ideally receives less allocation compared to other signals if it is genuinely random. The speaker discusses the agent function, which takes a list of advisors and a learning rate parameter, and computes the weight function. It iterates through the advisors list, computes the advisor signal, aggregates them sector-wise, and updates the context weights based on the computed weight. The section concludes with guidelines on strategy development, including avoiding overfitting, checking account leverage, and providing a list of demo strategies for viewers to explore.
The speaker discusses different methods of forward testing, such as paper trading or trading with a small amount of capital in live markets. They mention that BlueShift currently does not support PyTorch or Jupyter Notebooks but plans to support Keras and TensorFlow. The platform is not limited to Indian markets and can access US and Indian equity data as well as FX data. The speaker notes that BlueShift does not have built-in debugging tools at the moment but may add them in the future.
The speaker talks about options backtesting and mentions that most platforms offering it are unreliable or require extensive data cleaning and arrangement. They also note that, for Indian markets, only liquid futures are supported and third-party data feeds are not allowed. The recommended minimum backtesting period depends on trading frequency, and although one-minute data for Indian markets is available, optimization runs are not efficient due to technology limitations. BlueShift has no fees, and there are no restrictions on the number of simultaneous backtests, as long as the website traffic can handle them. Backtesting for PSA and using Python packages is possible, but the list of available packages is restricted for security reasons.
The speaker explains that backtesting is a crucial step in developing and evaluating trading strategies. It helps determine if a strategy is viable and profitable before deploying it in live markets. They highlight the importance of considering transaction costs, slippage, and other real-world factors when backtesting to ensure realistic results.
The speaker introduces the BlueShift platform, which provides an environment for backtesting and deploying trading strategies. BlueShift supports backtesting on Indian equity, US equity, and forex markets. Users can write and test their strategies using Python and leverage various built-in functions and libraries. The platform also allows users to paper trade their strategies or trade with real capital, depending on their preferences.
The speaker emphasizes the significance of forward testing, which involves deploying a strategy with a small amount of capital in live markets. This helps validate the strategy's performance and behavior in real-time conditions. They mention that BlueShift currently supports forward testing for Indian markets, and users can paper trade with a virtual capital of up to 1 crore (10 million) Indian Rupees.
Option backtesting is also discussed, with the speaker mentioning that many existing platforms for option backtesting are unreliable or require extensive data cleaning and preparation. They note that BlueShift does not currently support option backtesting but may consider adding it in the future.
Regarding data availability, the speaker mentions that BlueShift provides historical data for Indian equity, US equity, and forex markets. However, they note that optimizing strategies with one-minute data for Indian markets may not be efficient due to technological limitations.
The speaker clarifies that BlueShift does not have any fees for backtesting or using the platform. Users can conduct as many backtests as they want, as long as the website traffic can handle the load. They also mention that BlueShift has a restricted list of available Python packages for security reasons but users can still leverage popular packages like pandas and numpy.
The speaker highlights the importance of thorough backtesting and forward testing in strategy development. They encourage users to leverage the BlueShift platform for backtesting and deploying their trading strategies, while keeping in mind the limitations and considerations discussed during the presentation.
Forex Trading Strategies | Develop and Backtest Trading Ideas | Full FX Tutorial
Forex Trading Strategies | Develop and Backtest Trading Ideas | Full FX Tutorial
During this informative webinar, the speaker provides a comprehensive overview of QuantInsti's BlueShift, a powerful strategy development platform for systematic trading strategy research and backtesting. The platform offers a range of features and functionalities that make it an ideal tool for traders.
BlueShift is a cloud-based platform, which means users can access it from anywhere, allowing them to develop and analyze strategies on the go. It provides users with inbuilt financial datasets, making it convenient to access relevant market data for strategy development.
While the webinar primarily focuses on the foreign exchange (FX) market, the BlueShift platform also supports equity and futures trading across various markets. It emphasizes that the intellectual property of the backtesting strategies developed on the platform belongs entirely to the user, ensuring confidentiality and ownership.
The speaker delves into the nature of the foreign exchange market, highlighting its status as the largest decentralized market with a staggering daily trading volume of approximately 5 trillion dollars. Within this volume, around 300 billion dollars can be attributed to retail trading. The speaker discusses several factors that differentiate the FX market from the equity market, such as higher leverage, easier shorting opportunities, and relatively lower volatility.
To understand what drives the forex market, the speaker points out the significance of macroeconomic factors such as balance of payments, interest rates, inflation, economic growth, and fiscal policies. They also mention that corporate and hedging flows, as well as sudden political and geopolitical changes, can have a considerable impact on the market. However, it's important to note that there is no standard or widely accepted methodology for valuing the forex market. The speaker briefly mentions methods such as purchasing power parity and real effective exchange rate, with more advanced techniques preferred by large institutions and the International Monetary Fund (IMF). Additionally, the speaker emphasizes the importance of short-term funding markets in driving liquidity and determining overnight rollover costs.
When it comes to developing and backtesting forex trading strategies, the speaker introduces various approaches. Economic models, such as the monetary model and the behavioral equilibrium exchange rate model, use econometric methods to analyze data. Data-driven models, including time series forecasting, non-linear time series, and neural networks, are also discussed as viable options for short-duration forex trading. The BlueShift platform is presented as a user-friendly interface that facilitates strategy development and testing. Users can input datasets, starting capital, and metadata descriptions, among other details. The platform provides tools for full backtesting as well as running quick backtests. Built on Python's Zipline API, BlueShift offers a standard strategy template for users to begin their development process.
The speaker elaborates on the basic structure of forex trading strategies and the key functions required for backtesting. They explain the "initialize" function, which sets up backtest and account parameters. The "before trading start" function is called once per day at the start of the trading session, followed by the "handle data" function, which is called every minute on minute-level data. Finally, the "strategy" function is scheduled via the API for a specific time and date, with the rules defined by the user. After running a quick backtest, users can open the backtest tab to view different sets of output, including the equity curve, tear sheets, and other statistics.
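A skeleton of this structure might look like the following; the import paths mirror Blueshift's demo templates, and the FX instrument code, schedule and trading rule are placeholders for illustration.

```python
# Skeleton of the strategy structure described above for a Blueshift FX strategy.
from blueshift.api import (symbol, order_target_percent, schedule_function,
                           date_rules, time_rules, set_account_currency)

def initialize(context):
    # backtest and account parameters
    set_account_currency('USD')
    context.pair = symbol('EUR/USD')       # hypothetical FX instrument code
    # schedule the actual strategy for a specific time each day
    schedule_function(strategy, date_rules.every_day(),
                      time_rules.market_open(hours=2))

def before_trading_start(context, data):
    context.traded_today = False           # runs once at the start of each session

def handle_data(context, data):
    pass                                   # called every minute on minute data

def strategy(context, data):
    px = data.history(context.pair, 'close', 60, '1m')
    # user-defined rule: go long if price is above its 60-minute average
    target = 1.0 if px.iloc[-1] > px.mean() else 0.0
    order_target_percent(context.pair, target)
```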
The tear sheet, explained by the speaker, provides a set of reports for analyzing trading strategies. It includes parameters such as the Omega ratio, Sortino ratio, skewness, kurtosis, stability of the time series, and more. The speaker demonstrates the workflow using BlueShift, which involves initializing, going through "before trading start" and "handle data," and utilizing various API functions such as scheduling, setting commissions, setting slippage, and setting the account currency.
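A few of these tear-sheet statistics can be reproduced from a returns series with standard libraries, as in the sketch below; the returns are synthetic, the Sortino calculation is a common simplification, and "stability" is assumed here to mean the R-squared of cumulative log returns regressed on time.

```python
# Computing a handful of tear-sheet style statistics from daily returns.
import numpy as np
from scipy import stats

returns = np.random.normal(0.0005, 0.01, 252)           # synthetic daily returns

downside = returns[returns < 0]
sortino = np.sqrt(252) * returns.mean() / downside.std()  # simplified Sortino

skewness = stats.skew(returns)
kurtosis = stats.kurtosis(returns)

cum_log = np.log1p(returns).cumsum()
t = np.arange(len(cum_log))
stability = stats.linregress(t, cum_log).rvalue ** 2     # closer to 1 = steadier P&L

print(sortino, skewness, kurtosis, stability)
```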
The speaker mentions the availability of a standard template for forex trading strategies in the BlueShift platform. This template provides a starting point for users to develop their strategies by defining their entry and exit rules, risk management parameters, and other customization options.
The BlueShift platform also offers a wide range of built-in technical indicators, including moving averages, oscillators, and trend-following indicators, which can be used to build trading rules and signals. Users can combine these indicators with their own custom logic to create unique and personalized strategies.
To validate and evaluate the performance of a trading strategy, the speaker emphasizes the importance of conducting rigorous backtesting. BlueShift allows users to backtest their strategies using historical data to simulate real-world trading scenarios. The platform provides comprehensive performance metrics, including profitability, drawdown analysis, risk-adjusted returns, and various ratios like Sharpe ratio, Sortino ratio, and Calmar ratio.
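For illustration, the sketch below computes the Sharpe ratio, maximum drawdown and Calmar ratio from a synthetic daily returns series; it is a generic calculation of these metrics, not BlueShift's internal implementation.

```python
# Generic performance metrics from a daily returns series.
import numpy as np

returns = np.random.normal(0.0005, 0.01, 252 * 3)    # three years of synthetic returns

sharpe = np.sqrt(252) * returns.mean() / returns.std()

equity = np.cumprod(1 + returns)                     # equity curve
running_max = np.maximum.accumulate(equity)
max_drawdown = ((equity - running_max) / running_max).min()

years = len(returns) / 252
cagr = equity[-1] ** (1 / years) - 1
calmar = cagr / abs(max_drawdown)

print(sharpe, max_drawdown, calmar)
```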
Once a strategy has been backtested and validated, the speaker suggests the next step is to deploy it in a live trading environment. BlueShift provides integration with multiple brokerages, allowing users to execute their strategies directly from the platform. This seamless integration ensures a smooth transition from strategy development to live trading.
The speaker concludes the webinar by highlighting the benefits of using BlueShift for forex strategy development and backtesting. The platform offers a user-friendly interface, access to diverse financial datasets, and a comprehensive set of tools and indicators. It empowers traders to develop, test, and deploy their forex trading strategies with ease and efficiency.
The webinar provides a detailed overview of the BlueShift platform, its capabilities, and its application in forex trading strategy development. It offers valuable insights into the forex market, different modeling approaches, and the importance of robust backtesting. Traders looking to enhance their forex trading strategies may find BlueShift to be a valuable tool in their arsenal.
The speaker notes that multiple strategies are always better than one. He also mentions different methods for risk capital allocation, such as the Kelly criterion, equal-weighted, and momentum-weighted schemes. Additionally, he provides an example strategy using the Bollinger Bands technical indicator and shows the impressive statistics of the backtest results. He concludes by highlighting the importance of measuring the stability of the strategy's return over time to ensure consistency and avoid overfitting.
How EPAT Can Help You! by Nitesh Khandelwal - June 28, 2018
How EPAT Can Help You! by Nitesh Khandelwal - June 28, 2018
Nitesh Khandelwal, the speaker, introduces himself and his company, QuantInsti, as a provider of algorithmic and quantitative trading education for the past eight years. He begins by sharing his personal background, from his engineering days to his experience in the banking industry. He then highlights the launch of the Executive Programme in Algorithmic Trading (EPAT), a six-month program that offers consulting, training, and a smooth transition towards trading in the high-frequency trading (HFT) domain. Khandelwal mentions his experience in Singapore, where he worked with exchanges worldwide and expanded the business on a global scale.
Moving on, Khandelwal discusses algorithmic trading and its growth in comparison to DIY (do-it-yourself) trading. He shares statistics indicating the significant rise of algorithmic trading in Asia, Europe, and the US, highlighting how traders now prefer making their own trading decisions rather than relying on brokers. However, he notes that while algorithmic trading constitutes a significant portion of market activity in India, retail participation remains relatively low. Khandelwal references an article from Bloomberg that explores the increasing role of robots in replacing finance jobs.
Khandelwal goes on to explain why retail traders have been unable to adopt algorithmic trading and suggests ways to ensure it becomes an enabler rather than a threat. He emphasizes the need for statistical and technical knowledge, access to quality market data and efficient brokers, and guidance from practitioners when transitioning to automation. He explains how EPAT was created to address these needs and provide guidance to individuals interested in algo trading or automating their strategies.
Next, Khandelwal discusses the features of EPAT. He mentions that the program offers rich content created by practitioners, domain experts, and leading fund managers. The curriculum is continuously updated to align with market requirements, and lifelong access to updated content is provided. EPAT includes a dedicated support team to resolve queries, faculty guidance for alumni, and a career cell that assists in job opportunities, setting up trading desks, finding relevant brokers and data vendors, and more. Additionally, EPAT participants gain access to exclusive features available only to them.
Khandelwal highlights the importance of the primer module in EPAT, which ensures that all participants start the course on the same page. The primer module covers the basics of Excel, Python, statistics, and financial markets, which are fundamental building blocks of algorithmic trading. He explains how the primer module evolves over time to provide maximum value from the program. Furthermore, Khandelwal discusses the relevance of Python as the most widely used programming language in algorithmic and quantitative trading, leading to its inclusion in the EPAT program.
The speaker then delves into the different modules covered in EPAT and how they are approached. The program covers data analysis and modeling in Python, advanced statistical methodologies, equity, FX, and futures strategies, and machine learning for trading. Khandelwal emphasizes the importance of understanding the infrastructure and operations behind trading strategies, as well as options trading strategies, portfolio optimization, and operational risk in algorithmic trading. He also highlights the significance of completing a project under the mentorship of a domain expert and taking the EPAT exam to obtain a verified certificate.
Khandelwal provides an overview of the EPAT certificate program, which spans six months and includes over 100 hours of classroom connect, hands-on experience, and over 300 hours of coursework. He mentions the distinguished faculty members who teach the program, including practitioners, academics, and successful traders. The program offers placement opportunities and assists participants in CV and interview preparation, skill-gap identification, and access to placement partners such as brokers and investment banks. EPAT participants also gain access to privileged brokerage data and API providers, as well as advanced backtesting tools such as the Blueshift simulator.
Furthermore, Khandelwal discusses the benefits of EPAT and how it adds value to participants. He mentions access to minute-level data for Indian markets and S&P 500 stocks, continued learning opportunities, career assistance, and alumni reunions. He emphasizes that EPAT goes beyond just a certificate and provides a fundamental quantitative dimension to existing skill sets. Khandelwal clarifies that EPAT focuses on teaching participants how to create and validate trading strategies rather than providing ready-made working strategies. He acknowledges that the success ratio of strategies varies depending on factors such as infrastructure access, risk management, and risk appetite.
Khandelwal addresses a question about whether technical analysts can automate their trading using strategies like MACD crossovers, moving averages, and RSI after studying EPAT. He confirms that the program covers these strategies, ensuring participants have the knowledge and tools to automate their trading.
The speaker then moves on to discuss the investments required to start one's own algorithmic trading desk, explaining that the cost depends on the trading frequency of the desk. He mentions that EPAT primarily focuses on low- and medium-frequency trading but also covers aspects of high-frequency strategies. The program combines Python, Excel, R, and MATLAB and requires programming skills and conceptual clarity. EPAT provides guidance for students to set up their own trading desks. While EPAT does not guarantee job placements, they offer guidance to alumni who seek it.
Khandelwal clarifies that while EPAT does not provide placement guarantees, they do offer counseling to ensure candidates have a basic understanding of algorithmic trading before enrolling in the program. He highlights the success of many actively seeking EPAT students in landing jobs or making career changes due to the program's extensive network of placement partners. He mentions that EPAT's learning management system provides lifetime access to all sessions and updated content, and the course requires a time commitment of approximately 300 hours, which can be spread out over three months by dedicating an hour daily. Khandelwal emphasizes that EPAT's focus on practical implementation sets it apart from more theoretical courses.
Khandelwal discusses the fee structure for the EPAT course, which is $4,720 for developed markets and INR 189,000 plus GST for India. He also mentions the need for brokers and APIs to code strategies and explains that participants can expect career assistance in Hong Kong, although the EPAT team has had more success in India and Singapore. He advises that while the EPAT modules are interdependent and should be taken as a whole, one to two hours of daily effort should be sufficient for those with limited trading knowledge. He concludes by mentioning that the EPAT course covers all types of trading strategy paradigms and offers remote work opportunities for participants and alumni.
In the closing remarks, the speaker highlights that the EPAT program is comprehensive and provides complete access to all modules, making it valuable for individuals with a technology background looking to enter the algorithmic trading field. They mention the various job opportunities available in the domain, with many cases of EPAT participants starting their own ventures or securing jobs with prominent firms after completing the program. The speaker emphasizes the importance of understanding basic statistics, correlation, and regression to succeed in this field. Lastly, they emphasize that automated trading strategies do generate profit and account for nearly 50% of overall volumes in India, indicating the significant potential for those interested in algorithmic trading.
AMA on Algorithmic Trading | By Nitesh Khandelwal
AMA on Algorithmic Trading | By Nitesh Khandelwal
In this "ask me anything" session on algorithmic trading, Nitesh Khandelwal, co-founder of algo trading firm Eragy, welcomes the audience and shares his expertise on the topic. The session aims to cover various aspects of algorithmic trading, including platforms and brokers, trading strategies, market data, job opportunities, setting up an algo trading desk, regulations, the future of algo trading, and learning and education opportunities. Khandelwal mentions that the session will strike a balance between pre-prepared questions and live questions, and they also offer individual follow-up sessions for unanswered queries.
The presenter begins by explaining different trading strategies such as low-frequency, medium-frequency, and high-frequency trading, which are defined by the latency of the trading infrastructure and order-processing time. He emphasizes that the latency of the trading infrastructure matters more than the number of trades executed per second. The section then delves into where to obtain market data and economic data, discussing data vendors such as Yahoo Finance, Google Finance, Quandl, Alpha Vantage, and FXCM. These vendors offer either downloadable data or data that can be used on their platforms.
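As one illustration of pulling free data programmatically, the snippet below uses the third-party yfinance package to download Yahoo Finance data; the ticker and date range are arbitrary examples, and this is only one of the routes the speaker describes.

```python
# Downloading daily bars from Yahoo Finance via the yfinance package.
import yfinance as yf

data = yf.download('INFY.NS', start='2020-01-01', end='2020-12-31')
print(data[['Open', 'High', 'Low', 'Close', 'Volume']].head())
```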
Moving on, the speaker discusses the sources of data for algorithmic trading, including manual downloads, API fetching, and paid vendors such as Quandl, Global Data Feed, Trading Economics, Thomson Reuters, and Active Financial. They also address the question of whether high-frequency traders (HFT) generally outperform manual day traders, explaining that it depends on the type of day traders being analyzed. If traders are taking advantage of arbitrage opportunities or market inefficiencies, machines may be faster than manual traders. However, if traders are analyzing data and executing manual orders after thorough research, machines are not necessarily more efficient. The speaker dismisses the idea that an excessively algo-traded market is counterproductive, clarifying that automation does not always require high-frequency trading.
The concept of using algorithms in trading, commonly referred to as "algos," is explained. It involves trading with greater efficiency, and strategies can be automated and quantified using mathematical formulas. However, finding market inefficiencies can be challenging, and competition in high-frequency trading and technology infrastructure is becoming more expensive. The speaker also addresses the question of how to handle multiple strategies in a brokerage account through an API.
The prerequisites for algorithmic trading are discussed, involving knowledge of statistics and econometrics, financial computing, and quant trading. The presenter mentions that those starting from scratch can learn about these pillars through freely available resources on the Quant website. For traders already familiar with trading strategies and looking to automate, they can begin by using a broker API and eventually build their own platform. The speaker also explains the various data providers for tick data and mentions that while most vendors provide snapshot data, higher-end vendors can provide true tick data at a higher cost. Lastly, it is noted that for traders who are already successful with their current trading strategies, learning algo trading may not be necessary unless they want to keep upgrading and experimenting.
The benefits of automating trading strategies are discussed, including controlling emotions, scalability, and bandwidth to work on strategies while machines handle execution. The speaker emphasizes the importance of having a programming background for success in algorithmic trading and highlights that Python is widely used by most firms globally. However, the speaker advises that high-frequency trading is not suitable for retail traders, and some strategies may require a reasonable amount of capital before seeing success. Nonetheless, even with basic knowledge of Python, one can get started in algorithmic trading.
The skills required to become an algorithmic trader are discussed, including knowledge of statistics, econometrics, and trading strategies. The speaker also explains the various career opportunities in algorithmic trading, ranging from back-office roles to front-office trading roles. They mention that individuals with a software and data science background can venture into algo trading, as their background already provides a strong foundation and picking up the financial market side should be relatively easier. The speaker also mentions a blog about a 40-year-old alumnus of QuantInsti who successfully transitioned into algorithmic trading without prior trading experience. QuantInsti is highlighted as an institution that offers a dedicated career cell to help individuals acquire the necessary skills and connect with the right people to advance in their careers.
The speaker proceeds to discuss algorithmic trading languages and their significance in research and analysis. While high-frequency trading firms prefer using C++ for lower latency, for backtesting and strategy evaluation, R and Python are more popular choices. In response to a user's question about improving hit ratio and managing back-to-back losses, the speaker suggests optimizing parameters in backtesting and utilizing in-sample and out-of-sample trading to check for drawdown. Market saturation is also addressed, with the speaker stating that the HFT ratio serves as an indicator of competition and that plain vanilla arbitrage strategies may not be successful in highly saturated markets.
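The in-sample/out-of-sample suggestion can be sketched as below: parameters are chosen on the first 70% of a synthetic price history and then judged on the held-out remainder, including drawdown. The strategy rule, parameter grid and split ratio are assumptions for illustration.

```python
# In-sample parameter selection followed by out-of-sample evaluation.
import numpy as np
import pandas as pd

prices = pd.Series(np.cumsum(np.random.randn(1000)) + 100.0)   # synthetic prices
split = int(len(prices) * 0.7)
in_sample, out_sample = prices[:split], prices[split:]

def sma_strategy_returns(px, fast, slow):
    # long when the fast moving average is above the slow one
    signal = (px.rolling(fast).mean() > px.rolling(slow).mean()).astype(int)
    return signal.shift(1) * px.pct_change()

# pick the lookbacks that do best in-sample...
best = max(((f, s) for f in (5, 10, 20) for s in (50, 100)),
           key=lambda p: sma_strategy_returns(in_sample, *p).sum())

# ...then judge them on out-of-sample returns and drawdown
oos = sma_strategy_returns(out_sample, *best).fillna(0)
equity = (1 + oos).cumprod()
max_dd = ((equity - equity.cummax()) / equity.cummax()).min()
print(best, oos.sum(), max_dd)
```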
Different algorithmic trading strategies are further explored, highlighting the need for a strong technology infrastructure for plain vanilla arbitrage and market-making strategies. The speaker engages with various audience questions, including the meaning of total bid quantity, the impact of HFTs on traditional traders in India, and the time horizon used to analyze data for algo trading, explaining that the time horizon depends on the trading frequency.
Nitesh Khandelwal addresses several questions related to the possibility of setting up a trading platform with their company, legal approval for automation, costs, and Indian market regulations. They clarify that their company provides guidance and lifelong support to participants and alumni but does not offer consulting services. Automation is possible, and the costs depend on the required infrastructure. In countries like India, each trading strategy needs approval before automation, and only the broker can do that on behalf of the trader. The usage of stochastic and fundamental indicators in strategies is discussed, mentioning that they can be used manually or through software. The speaker also mentions the availability of tools for reading machine-readable news and economic data to create algorithms.
The session delves into whether people in India can engage in high-frequency trading (HFT) on non-Indian markets and whether HFT drives retail traders away from the markets. Regarding non-Indian markets, it is explained that sending money abroad to trade margin products listed on foreign exchanges is not permitted under the LRS scheme unless one has RBI approval. However, if a global company outsources some of its trading to an Indian company, then it could be possible. Regarding the impact of HFT on retail traders, it is mentioned that the presence of HFTs adds liquidity to the market and tightens spreads, which benefits retail traders. However, illegal activities like front running should not be permitted, irrespective of the domain.
The speaker emphasizes that high-frequency trading (HFT) does not harm individual retail traders, as they typically use web-based browsers that inherently have a built-in latency of a few hundred milliseconds. Even if HFT firms use illegal methods to gain faster access, it would not impact the retail trader but harm other HFT firms that follow the rules. The speaker emphasizes that retail traders generally benefit from the efficient market created by HFT, as it eliminates arbitrage opportunities. The speaker also addresses a question about learning algorithmic trading in English and discusses a few important components for consistently profitable trading.
The video underscores the importance of continuously evolving trading strategies in the algorithmic trading industry, as markets constantly change. While not many brokers in India support algorithmic trading, some do offer programmatic or semi-algorithmic trading options. The speaker also discusses the job market for quant analysts, highlighting that it is not exclusive to PhDs but rather depends on individuals' knowledge and problem-solving skills. The hardware and infrastructure requirements for algorithmic trading are addressed as well. For low-frequency trading, a decent laptop or cloud computing options from providers such as Amazon and Google are sufficient. Medium-frequency trading requires an algorithmic trading platform and a specialized server, which can cost a few thousand dollars. High-frequency trading demands a specialized server ranging from $10,000 to $25,000.
The speaker explains the approvals required before going live, which depend on the exchange and location. They clarify that the EPAT program covers a comprehensive range of topics and focuses on practical learning, although it does not guarantee profitable strategies. The different types of algorithms used in automated trading are discussed, including low, medium, and high-frequency algorithms. High-frequency algorithms are utilized for arbitrage, market making, and directional strategies that require faster computing. Low and medium-frequency algorithms can automate various strategies, including fundamental investing. Popular strategies like momentum, statistical arbitrage, and option-based strategies are also mentioned, with algorithms providing benefits such as scalability, emotional control, and better analysis of big data.
For retail traders interested in algorithmic trading but lacking programming experience, the speaker suggests starting with learning basic statistics and trading strategies. They provide resources for self-paced learning. Nitesh Khandelwal emphasizes the idea of creating one's own trading strategy rather than relying on pre-existing ones. They also touch upon the role of algo trading in the cryptocurrency market, stating that while some participants use automation tools for trading cryptocurrencies, algo trading is not the sole reason behind the cryptocurrency boom. The potential impact of artificial intelligence and machine learning on algo trading is mentioned, with the speaker highlighting that it will empower individual and retail traders alongside big institutions due to the affordability of computing power required for training algorithms.
The speaker further discusses the expected increase in retail participation in algorithmic trading due to the changes and automation happening in the financial sector. They address questions from the audience about resources for balance sheet data, transitioning from a non-finance firm to an algorithmic trader, and the ideal numbers for CAGR (Compound Annual Growth Rate) and winning ratio in algorithmic trading. The speaker cautions against solely focusing on percentage returns and instead emphasizes scalability, strong infrastructure, and technology as important considerations.
The session concludes with the speaker discussing the importance of considering risk when discussing returns and the investment required to start an algo trading business, which can range from a few thousand dollars to hundreds of thousands depending on the frequency and type of infrastructure needed. The speaker mentions that automation and risk management are key factors to consider when starting an algo trading business. They also provide insights into real-time data availability in India and the approval process for trading strategies, emphasizing that exchanges prioritize risk management over the specifics of the strategy. Finally, the speaker acknowledges the scarcity of good websites for backtesting and writing leveraged and intraday strategies in Indian markets.
In the last segment, the speaker discusses the development of tools for different markets at Horn Insights, aiming to provide better exposure and benefits to participants and users. They address a question about the salary range for quants in India, noting that it depends on factors such as experience and background. The speaker emphasizes that colocation is not manipulation, comparing it to paying for air travel to reach a destination faster than by train. They also mention that most technical indicator-based strategies can be developed using Python and note that while advanced programs in the algorithmic trading domain are not widely available, lifelong guidance is provided through the EPAT program.
In the final moments of the video, the speaker encourages individuals to pursue algorithmic trading and mentions that the market has evolved significantly over the years, becoming more accessible to retail traders. They invite viewers to explore the resources available at QuantInsti and Horn Insights to further their knowledge and understanding of algorithmic trading.