
Build your own algos with ADL® by Trading Technologies
Andrew Reynolds, product manager for automated trading tools at Trading Technologies, introduces ADL (Algo Design Lab) as a solution for simplifying the development of trading algorithms. Before ADL, traders who wanted to build their own algorithms had to learn to code, which meant a lengthy development cycle. ADL instead provides an intuitive graphical tool that lets traders design and deploy algorithms without writing a single line of code, significantly lowering the technical barrier to entry and enabling traders to capitalize on market opportunities quickly. To preserve performance, ADL converts the designed algorithms into well-tested code that runs on co-located, high-performance servers.
Reynolds proceeds to explain the key features and functionalities of ADL. The ADL canvas serves as the workspace, consisting of a wide array of blocks representing different trading concepts and operations. Traders can easily drag and drop these blocks to create algorithms, and each block has specific properties and can be connected to other blocks to define the desired logic. Group blocks allow encapsulating specific logic and saving them as library blocks for future reuse. To enhance organization, bookmarks can be added, and a search mechanism is available for quick navigation through blocks and sections. ADL incorporates predictive techniques to detect potential block connections, further expediting the development process.
As the presentation continues, the instructor demonstrates the step-by-step creation of algorithms using ADL. The platform offers real-time feedback and user-friendly features to aid in efficient development. The instructor showcases the addition of entry side logic to an algorithm, followed by the incorporation of exit side logic, and finally the creation of an algorithm with both entry and exit side logic. Various blocks such as order blocks, message info extractors, field blocks, and alert blocks are utilized to define the desired functionality of the algorithms. Throughout the demonstration, the instructor highlights the readability and customization options provided by jump blocks, allowing traders to tailor their algorithms according to their preferences.
The instructor then introduces the Order Management Algo (OMA), which enables applying algorithmic logic to existing orders, providing flexibility to manipulate price, quantity, stop price, and disclosed quantity as needed. They explain how the bid drifter strategy can be implemented, incrementally increasing the price in intervals until the order is filled. The instructor emphasizes that ADL is designed to prevent unintended actions and infinite loops, ensuring user safety and expected behavior. Furthermore, ADL incorporates a P&L risk block feature that allows traders to set predefined loss thresholds, automatically stopping the algorithm if losses exceed the specified amount.
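The bid-drifter logic is easy to express outside of ADL's visual blocks. Below is a hypothetical plain-Python sketch of the idea; the names (`BidDrifter`, `tick_size`, `interval_secs`) and the order-handle interface are invented for illustration and are not Trading Technologies code.

```python
import time

class BidDrifter:
    """Hypothetical sketch of a bid drifter: raise a resting bid by one tick
    at fixed intervals until it fills or reaches a price cap."""

    def __init__(self, order, tick_size, interval_secs, max_price):
        self.order = order            # assumed order handle with .filled, .price, .amend_price()
        self.tick_size = tick_size
        self.interval_secs = interval_secs
        self.max_price = max_price    # hard cap: the kind of bound that prevents infinite loops

    def run(self):
        while not self.order.filled and self.order.price < self.max_price:
            time.sleep(self.interval_secs)
            if not self.order.filled:
                self.order.amend_price(self.order.price + self.tick_size)
```

In ADL itself this logic is assembled from blocks rather than written out, and the platform's built-in safeguards play the role of the explicit price cap above.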
The presenters discuss the launch and monitoring of algorithms using ADL. Algo launching can be initiated from various widgets within the front end: the Autotrader, the algo dashboard, the order book, or MD Trader. The one-click launching capability directly from the MD Trader ladder is highlighted, enabling traders to choose instruments and modify algo parameters effortlessly. ADL also provides the ability to select colocation facilities based on the instrument, and traders can monitor the progress of their algorithms directly from the front end. Additionally, the platform supports specifying different accounts for each instrument when launching algorithms, enhancing flexibility and account management options.
The presenters emphasize the availability of resources for learning more about ADL on the Trading Technologies website, including a support forum for discussing ADL-related topics. They announce the upcoming addition of an analytics block, which will allow extracting historical data and performing built-in studies within ADL; users will also be able to build custom studies on historical data directly within an algorithm. The presenters highlight that Trading Technologies is broker-neutral, enabling connection to any broker that supports the platform. Pricing details are also mentioned, and the "stacker" algorithm type is identified as a common use case.
The speakers delve into the versatility of writing algorithms using ADL, emphasizing that each trader can bring their unique "secret sauce" to algorithmic trading. They recommend the Trading Technologies community forum as an excellent resource for obtaining additional information and insights on popular algorithmic strategies. The advantages of single-click launch with autotraders are explained, allowing traders to model multiple trades simultaneously. They also mention the availability of the ADL dashboard on mobile apps, enabling traders to pause and restart algorithms remotely.
The presentation proceeds with a discussion on accessing the ADL platform through a free demo account on the TradeTT site, providing immediate access and an opportunity to explore the platform's capabilities. It is highlighted that ADL is co-located with major exchanges, offering a pool of servers located in facilities across various locations, including a gen-pop server for users to experiment with different trades. The speakers also touch upon web services and APIs, mentioning the release of the TT REST API and the utility of the ADL platform for forex trading.
Regarding foreign exchange trading options, the speakers clarify that while there are no immediate plans to connect directly with forex exchanges, forex futures are available on the CME, and NYSE offers a spot forex contract. They encourage audience members to engage in the forums, which track and address product enhancements. The session concludes with a preview of the upcoming program and a request for attendees to fill out a survey form before the webinar ends.
Quantitative Finance | Introduction to Machine Learning | Quantiacs | By Eric Hamer
Eric Hamer, the CTO of Quantiacs, introduces the partnership between Quantiacs and QuantInsti, aiming to democratize the hedge fund industry. This collaboration provides training sessions that equip students with practical skills using Quantiacs' open-source tools and data. Quantiacs functions as a crowd-sourced hedge fund, connecting quantitative analysts who develop algorithms with capital, while QuantInsti offers courses in algorithmic trading. Hamer highlights that participating quants can compete in Quantiacs competitions, where they have the opportunity to win investment capital and a share of the profits.
Hamer delves into how Quantiacs connects coders' algorithms to capital markets, benefiting both the quant and Quantiacs if the strategies prove successful. Quantiacs strives to promote quantitative trading by offering downloadable desktop toolkits for MATLAB and Python, sample trading strategies, and free end-of-day futures data dating back to 1990. They have also incorporated macroeconomic indicators to assist clients in improving their algorithms. Moreover, Quantiacs provides an online platform where users can submit and evaluate their algorithms at no cost. Currently focused on futures, Quantiacs aims to potentially provide comparable data for equity markets in the future.
The speaker explains the two primary functions of trading strategies in the Quantiacs platform: the cost function and the trading system. The cost function accounts for transaction costs and commissions by utilizing 5% of the difference between the high and low prices of a given day. On the other hand, the trading system allows users to request price information and provide a weight vector or matrix that determines portfolio allocation. Quantiacs discourages the use of global variables and offers a settings parameter to maintain necessary state information. Hamer provides an example of a simple trading strategy that has yielded a 2.5% annual return. The strategy's output includes an equity curve, performance of long and short positions, and individual futures performance. Quantiacs evaluates strategies based on positive performance, low volatility, and the Sharpe ratio, which measures risk-adjusted returns.
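As a concrete illustration, here is a minimal strategy in the style of the legacy Quantiacs Python toolkit. The `mySettings`/`myTradingSystem` names and settings keys follow the toolkit's documented conventions, but the market list and the momentum rule are invented for this sketch.

```python
import numpy as np

def mySettings():
    # Settings keys per the legacy Quantiacs toolkit; values here are illustrative
    settings = {}
    settings['markets']  = ['CASH', 'F_ES', 'F_NQ']   # cash plus two futures markets
    settings['lookback'] = 504                        # trading days of history supplied
    settings['budget']   = 10**6
    settings['slippage'] = 0.05                       # cost model: 5% of the daily high-low range
    return settings

def myTradingSystem(DATE, OPEN, HIGH, LOW, CLOSE, settings):
    # CLOSE arrives as a (lookback x markets) array; weight each market by
    # its trailing 200-day return, normalized so absolute weights sum to 1
    momentum = (CLOSE[-1] - CLOSE[-200]) / CLOSE[-200]
    momentum = np.nan_to_num(momentum)
    total = np.sum(np.abs(momentum))
    weights = momentum / total if total > 0 else momentum
    return weights, settings
```

The returned weight vector is exactly the portfolio-allocation output the summary describes; no global variables are needed because state can live in the `settings` dictionary.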
The concept of machine learning and its applications in quantitative finance are introduced by Hamer. He highlights that a significant portion of trades on American stock exchanges, approximately 85% to 90%, are computer-generated. Machine learning techniques such as regression, classification, and clustering are becoming increasingly prevalent in the field. Hamer discusses some pitfalls associated with machine learning, emphasizing the importance of maximizing risk-adjusted returns without excessive trading. While neural networks can yield excellent results, their training times can be lengthy, and traditional CPU architectures are not optimal for them; high-performing GPUs, however, can reduce execution time significantly. And although open-source machine learning libraries are available for Python and MATLAB, setting up and training a machine learning algorithm remains a complex process requiring effort and dedication.
Hamer delves into the process of machine learning, starting with specifying the problem statement and identifying the type of machine learning problem. He explains the requirement for numeric data in machine learning and discusses the division of data into training and test sets for model training and evaluation, respectively. Hamer provides an example demonstrating how the Quantiacs Python API can be utilized to make predictions on the mini S&P 500 futures contract, displaying the results using the Keras neural network API.
The limitations of the machine learning model created for predicting future stock prices are discussed by Hamer. While the model may initially appear to accurately predict prices, closer inspection reveals that it is merely using today's data as a proxy for tomorrow's data. When applying the same algorithm to raw data returns, the model's predictions follow a similar shape but not the same magnitude as the true values. Hamer demonstrates the poor performance of the model when applied to trading data and explores potential avenues for improvement. He also provides a brief overview of the source code used in his trading system function.
Hamer proceeds to demonstrate the creation of a sequential Keras model for predicting S&P 500 futures returns. The model begins with a basic structure and incorporates specific layers. Hamer trains the model using training data, which comprises actual price data, while the y-values represent the return data to be predicted. Once trained, Hamer can extract the model from the settings and use it to predict returns based on the most recent data. While his simple S&P 500 mini model does not perform well, Hamer explains that proper techniques and optimizations such as gradient descent and boosting can solve the problem.
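A minimal version of such a model might look like the following Keras sketch. The layer sizes, the 20-value feature window, and the random stand-in data are assumptions for illustration, not Hamer's actual configuration.

```python
import numpy as np
from tensorflow import keras

# Stand-in data: 20 recent prices per sample (X), next-day return as target (y)
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 20))
y = rng.normal(size=1000)

model = keras.Sequential([
    keras.Input(shape=(20,)),
    keras.layers.Dense(32, activation='relu'),
    keras.layers.Dense(16, activation='relu'),
    keras.layers.Dense(1),          # single regression output: the predicted return
])
model.compile(optimizer='adam', loss='mse')
model.fit(X[:800], y[:800], epochs=10, batch_size=32, verbose=0)   # training split

predictions = model.predict(X[800:], verbose=0)                    # held-out test split
```

As in the talk, the trained model object can be stashed (for example in a settings dictionary) and reused to predict returns from the most recent data.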
Techniques to enhance the validity of a machine learning algorithm in quantitative finance are discussed by Hamer. He suggests bootstrap aggregating (bagging), which runs the algorithm on multiple resampled subsets of the data to gain insight into its stability, as sketched below. He also recommends keeping strategies simple, combining multiple predictions to reach a consensus, and being careful about overfitting, data cleaning, missing data, and random variables. Hamer believes that machine learning and artificial intelligence will continue to be crucial tools for forecasting financial markets.
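A minimal scikit-learn illustration of bagging (not Hamer's code; the stand-in data and model settings are assumptions):

```python
import numpy as np
from sklearn.ensemble import BaggingRegressor

rng = np.random.default_rng(0)
X = rng.normal(size=(600, 5))                        # stand-in features (e.g. lagged returns)
y = 0.1 * X[:, 0] + rng.normal(scale=0.5, size=600)  # stand-in next-day returns

# Bagging: fit the default base learner (a decision tree) on many bootstrap
# resamples of the data, then average predictions across the ensemble
model = BaggingRegressor(n_estimators=100, random_state=0).fit(X, y)
print(model.predict(X[:3]))
```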
The speaker introduces the EPAT and Quantra courses, both offering dedicated sessions on machine learning. EPAT caters to professionals seeking growth in the algo or quantitative trading field, while Quantra provides a self-paced course on implementing regression techniques using machine learning with Python. Hamer responds to questions regarding the choice between R and Python for machine learning and offers advice on avoiding overfitting when testing alternative datasets. He suggests training the model on both training and testing data and examining the difference in error between the two sets to detect overfitting.
Hamer highlights the dangers of overfitting in machine learning for algo trading and suggests employing the bootstrap aggregation or bagging technique to split a dataset into smaller subsets for accuracy testing. Due to the noise and fluctuations in financial data, anything above 50% accuracy can be considered good.
Finally, Hamer emphasizes the significance of understanding technology to automate trading strategies. He stresses the need for education programs that provide training in the diverse skills required to succeed as an algorithmic trader.
Can we use Mixture Models to Predict Market Bottoms? by Brian Christopher - 25th April 2017
Brian Christopher, a quantitative researcher and Python developer, delivers a comprehensive presentation on the limitations of traditional time series analysis and introduces mixture models, specifically hidden Markov models (HMMs), as a promising alternative for predicting returns and identifying market regimes. He emphasizes the need for models that can handle non-stationary data and approximate nonlinear distributions, which are essential in financial forecasting.
Christopher explores how mixture models, particularly HMMs, can be used to estimate an asset's most likely regime, along with the associated mean and variance for each regime. He explains the computational process, which alternates between computing class parameters and evaluating the likelihood of the data. The Gaussian mixture model (GMM), a well-known mixture model, assumes that each regime follows a Gaussian distribution. Christopher demonstrates how the expectation-maximization algorithm is employed to update probabilities and regime parameters until convergence. To illustrate this, he showcases an example of classifying the SPY ETF's low-volatility, neutral, and high-volatility regimes.
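A sketch of that regime classification using scikit-learn's `GaussianMixture`, which runs expectation-maximization under the hood. The stand-in return series is random, and the three-component choice mirrors the low/neutral/high-volatility example; none of this is Christopher's actual code.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

# Stand-in daily returns; in the talk this would be the SPY return series
returns = np.random.default_rng(0).normal(0.0, 0.01, 2500).reshape(-1, 1)

# Three components for the low-volatility, neutral, and high-volatility regimes;
# fit() iterates EM until the data likelihood converges
gmm = GaussianMixture(n_components=3, covariance_type='full', random_state=0)
labels = gmm.fit_predict(returns)

order = np.argsort(gmm.covariances_.ravel())   # sort regimes from calm to volatile
print("means:    ", gmm.means_.ravel()[order])
print("variances:", gmm.covariances_.ravel()[order])
print("latest regime label:", labels[-1])
```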
Next, Christopher delves into how GMMs can handle non-stationary and nonlinear datasets, overcoming the limitations of traditional time series analysis. He presents a toy strategy that uses four factors, including asset returns and the US Treasury ten-year/three-month spread, to estimate sequence returns and parameters. GMMs are used to fit and predict, extracting the last regime label's estimate to obtain that regime's mean and variance. Instead of assuming a normal distribution, the strategy uses the Johnson SU distribution to account for the nonlinear nature of the data.
The speaker discusses a strategy for predicting market bottoms based on the assumption that returns outside of confidence intervals are outliers. By constructing 99% confidence intervals through a thousand samples, returns below the lower confidence interval are considered outliers. Christopher analyzes the returns after the outlier event, assuming a long-only or buy position in the ETF for a specified number of days. The model adapts to changing volatility, and while the overall accuracy is around 73%, the equity curve does not perform as well as a buy-and-hold strategy. Christopher encourages the audience to explore the data themselves, as the datasets used in the presentation are available on GitHub.
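The outlier test can be sketched with SciPy's `johnsonsu` distribution. The shape parameters and regime moments below are placeholders, not the values Christopher used; only the mechanics (1,000 samples, a 99% interval, a long signal on a downside outlier) follow the talk.

```python
import numpy as np
from scipy import stats

# Placeholder regime moments (in the strategy these come from the fitted GMM)
mu, sigma = 0.0005, 0.012
a, b = 0.5, 1.5          # assumed Johnson SU shape parameters

# Build a 99% confidence interval from 1,000 sampled returns
samples = stats.johnsonsu.rvs(a, b, loc=mu, scale=sigma, size=1000, random_state=0)
lower, upper = np.percentile(samples, [0.5, 99.5])

todays_return = -0.035
if todays_return < lower:
    print("return below the 99% interval -> outlier, candidate market-bottom signal")
```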
Christopher shares his analysis of using mixture models to predict market bottoms for various ETFs. He examines the distribution of median returns for each ETF across different look-back and holding periods. SPY, QQQ, and TLT consistently outperform in different dimensions, while GLD, EFA, and EEM exhibit more symmetric distributions. He also evaluates the sum ratio, which divides the total returns of events greater than zero by the total returns of events less than zero, with values greater than 1 considered successful. SPY, QQQ, and TLT show strong performance across multiple dimensions and look-back periods. However, Christopher cautions that longer holding periods may be more influenced by the overall market trend.
The presenter discusses the performance of different assets using mixture models to predict market bottoms. The study reveals that assets such as SPY, QQQ, TLT, and GLD perform well depending on variables such as the number of steps or the look-back period, although the performance of certain assets deteriorates with longer holding periods. The study evaluates median returns across different components and identifies promising results for assets like EEM and EFA. The importance of a proper sampling distribution is emphasized, and the Johnson SU distribution is shown to be effective. Overall, the strategy of using mixture models to predict market bottoms proves compelling.
Christopher explains that while the GMM has consistently shown success with assets like SPY, QQQ, and TLT, there are alternative strategies that perform equally well or better. He briefly discusses the code for the model runner class and the run-model convenience function, which implements the GMM components. He emphasizes that the model was implemented in a walk-forward fashion to avoid look-ahead bias. Additionally, Christopher provides the data he used in HDF5 format on GitHub.
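A walk-forward loop of this kind might look like the sketch below (assumed window length and component count; refitting every bar is slow but keeps each day's label free of future data):

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rets = np.random.default_rng(0).normal(0.0, 0.01, 1500)   # stand-in return series
lookback, labels = 500, []

# Refit on a trailing window only, then label the next observation:
# day t's regime never sees data from day t+1 onward (no look-ahead bias)
for t in range(lookback, len(rets)):
    window = rets[t - lookback:t].reshape(-1, 1)
    gmm = GaussianMixture(n_components=2, random_state=0).fit(window)
    labels.append(int(gmm.predict(rets[t:t + 1].reshape(1, -1))[0]))
```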
The speaker explains how to organize and analyze the output data to assess the effectiveness of the mixture model strategy. Various slicing and grouping techniques can be employed to evaluate metrics and means. The Johnson SU distribution is used to adapt to changing volatility in the return series and is compared to the normal distribution. Christopher notes that the accuracy under the normal distribution is poor, to the point that simply holding the market may be more profitable. He encourages individuals to explore the data on GitHub and offers to address any questions or participate in a webinar.
During the Q&A session, Christopher answers audience questions regarding his webinar on using mixture models to predict market bottoms. He clarifies that he determined the shape parameters for the Johnson distribution through a coarse parameter search and did not extensively research the results. He also discusses how he selected helpful factors for his model, highlighting the inclusion of US-based interests or fixed income metrics to enhance the model's success in predicting US-based asset returns.
Christopher addresses additional audience questions regarding the application of GMM to returns instead of price, the issue of scale when using price, the bias-variance problem with multiple factors, and the similarity between look-back and back-testing. He suggests further exploration and research on combinations of factors that are more predictive across a wider range of assets. He also emphasizes the importance of setting a natural limit to the number of GMM components to avoid overfitting. Christopher invites the audience to reach out to him for further questions and details.
Implied Volatility From Theory to Practice by Arnav Sheth - 7 March, 2017
Arnav Sheth, an esteemed professor with extensive knowledge of volatility, takes the stage as the speaker of a webinar titled "Implied Volatility From Theory to Practice." The host introduces Sheth, highlighting his expertise in the field, including his book publication and founding of a consultancy and analytical platform. The webinar aims to provide attendees with a comprehensive understanding of implied volatility, different types of volatility, trading strategies exploiting implied volatility, and available online resources and Chicago Board Options Exchange (CBOE) indexes for further exploration.
Sheth begins by offering a concise overview of options, covering various volatilities such as historical and implied volatility. He dives into one trading strategy in detail and discusses a couple of CBOE indexes, providing practical insights into their application. To provide a historical context, Sheth shares the origins of options, tracing back to the first recorded options contract around 500 BC. He recounts the story of Thales, a mathematician and philosopher, who secured exclusive rights to all olive presses during a bountiful harvest. This tale illustrates the early manifestation of options trading.
Moving into the modern definition of options, Sheth clarifies the concept of call options, describing them as contracts that allow speculation or hedging on the future of an underlying asset. He emphasizes that options give the holder the right, but not the obligation, to exercise the contract. Sheth proceeds to explain the basics of call and put options trading: a call option grants the buyer the right to buy an underlying asset at a specified price, while a put option gives the buyer the right to sell the underlying asset at a predetermined price. He underscores that options trading is a zero-sum game: for every winner there is a loser, so total profits and losses sum to zero. Sheth warns about the risks of selling a call option without owning the underlying stock but notes that if one owns the stock, selling a call can help mitigate risk.
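The payoffs Sheth describes reduce to two one-line formulas; this small sketch also makes the zero-sum point explicit (the writer's payoff is the exact negative of the buyer's). The numbers are arbitrary examples.

```python
def call_payoff(spot_at_expiry, strike):
    # Right, not obligation, to buy at the strike: exercised only when profitable
    return max(spot_at_expiry - strike, 0.0)

def put_payoff(spot_at_expiry, strike):
    # Right, not obligation, to sell at the strike
    return max(strike - spot_at_expiry, 0.0)

# Zero-sum: the option writer's payoff mirrors the buyer's with opposite sign
s, k = 105.0, 100.0
assert call_payoff(s, k) == 5.0
assert -call_payoff(s, k) == -5.0   # writer loses exactly what the buyer gains
```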
Sheth delves further into option contracts, covering long call, short call, long put, and short put options. He explains their potential profit and loss outcomes, cautioning against engaging in "naked options" trading for beginners. Moreover, he emphasizes the significance of accounting for the time value of money when calculating profit versus payoff. Sheth distinguishes between European and American options, clarifying that European options can only be exercised at expiration, while American options can be exercised at any time. He concludes this section by introducing the Black-Scholes-Merton pricing model, which he likens to a "levered stock purchase."
The focus then shifts to the Black-Scholes-Merton (BSM) model and its underlying assumptions. Sheth highlights one of these assumptions, stating that the volatility of returns is known and remains constant throughout the option's lifespan. He proceeds to discuss historical volatility, which represents the standard deviation of historical asset returns. Sheth explains its importance in predicting the potential profitability of an option, highlighting that higher volatility increases the option price due to a greater probability of the asset ending up "in the money."
Next, Sheth explores implied volatility and its role in reverse-engineering volatility from the Black-Scholes model using market options. Implied volatility is interpreted as the market's expected volatility and is calculated based on market option prices. Sheth introduces the VIX, which utilizes 30-day maturity at-the-money S&P 500 options to estimate implied volatility. The VIX measures the volatility that the market anticipates during the option's expiration period. He notes that traders often use implied volatility, derived from option prices, to price options rather than the other way around. Sheth emphasizes that if different strikes are associated with the same underlying asset, their implied volatility should remain constant.
Sheth proceeds to explain the concept of volatility skew in options pricing. He demonstrates how implied volatility deviates from historical volatility as the strike price diverges, resulting in the volatility skew. Sheth highlights that the skew emerged after 1987 and presents an opportunity for traders, as it is reflected in options prices. He introduces the term "volatility risk premium," which represents the difference between implied and realized volatility. This premium can be exploited in trading strategies. Sheth clarifies that while the Black-Scholes model is primarily used to price options, it is more commonly utilized to obtain implied volatility.
The calculation of implied volatility in the options market becomes the next topic of discussion. Sheth explains how traders utilize market values of specific options on underlying assets and input these values into the Black-Scholes model to reverse engineer volatility. Implied volatility is then interpreted as the expected volatility by options markets for a specified period, often 30 days. Sheth introduces the concept of the volatility risk premium, showcasing how options markets tend to overestimate actual volatility. He concludes this section by presenting a frequency distribution of the volatility premium.
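Backing implied volatility out of the Black-Scholes model amounts to a one-dimensional root search: find the volatility at which the model price matches the observed market price. A minimal SciPy sketch (the example inputs are invented):

```python
import numpy as np
from scipy.stats import norm
from scipy.optimize import brentq

def bs_call(S, K, T, r, sigma):
    # Standard Black-Scholes price of a European call
    d1 = (np.log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * np.sqrt(T))
    d2 = d1 - sigma * np.sqrt(T)
    return S * norm.cdf(d1) - K * np.exp(-r * T) * norm.cdf(d2)

def implied_vol(market_price, S, K, T, r):
    # Solve for the sigma that reproduces the observed option price
    return brentq(lambda s: bs_call(S, K, T, r, s) - market_price, 1e-6, 5.0)

# Hypothetical 30-day at-the-money call quoted at 4.50
print(implied_vol(market_price=4.50, S=100, K=100, T=30 / 365, r=0.01))
```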
The speaker delves into trading strategies based on implied volatility, focusing on the concept of selling straddles. Sheth highlights that implied volatility is typically higher than realized volatility, resulting in overpriced options. As a result, the strategy involves selling straddles and going short on volatility. To assess the risks associated with these strategies, Sheth introduces Greek measurements, which provide a framework for evaluating risk. He offers an example scenario involving the purchase of an at-the-money straddle and discusses the profit and loss outcomes based on the underlying stock price. Sheth concludes by cautioning that if the stock price fluctuates significantly, options pricing may no longer be sensitive to volatility.
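The short-straddle economics can be checked in a few lines (illustrative premiums, not market data):

```python
def short_straddle_pnl(spot_at_expiry, strike, call_premium, put_premium):
    # Seller collects both premiums up front and pays the in-the-money leg at expiry
    payoff = max(spot_at_expiry - strike, 0.0) + max(strike - spot_at_expiry, 0.0)
    return call_premium + put_premium - payoff

# Sell the 100-strike straddle for 3 + 3: profitable while |S_T - 100| < 6
for s in (95, 100, 105, 110):
    print(s, short_straddle_pnl(s, 100.0, 3.0, 3.0))
```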
The video proceeds to discuss the use of options as a hedge against changes in stock prices. Sheth explains that by simultaneously purchasing a call and a put, or selling both, closest to the value of the stock price, delta neutrality can be achieved, but vega cannot be fully hedged. Sheth then introduces CBOE indexes as a convenient way to capitalize on the volatility premium, specifically mentioning the BXM (BuyWrite Monthly) index, which involves a covered call strategy, and the BFLY iron butterfly option. He explains that writing covered calls on the owned stock can reduce the risk associated with solely holding the underlying stock, but it also carries the possibility of losing the stock if it is called. Lastly, Sheth explains the strategy of the iron butterfly, which entails buying and selling four options with three strikes against the S&P 500.
Towards the end of the webinar, Sheth presents a strategy involving the sale of an out-of-the-money put and an out-of-the-money call, a short strangle. This results in a short volatility position similar to a reverse straddle, but with a slightly wider payoff range to increase profit potential.
How to Use Financial Market Data for Fundamental and Quantitative Analysis - 21st Feb 2017
Learn to trade fundamentals profitably, understand the challenges surrounding High-frequency data analysis, discover the opportunities and gotchas in Futures trading, and view a live demonstration of a step-by-step tutorial on one of the most popular trading strategies, the Pairs trading strategy!
Informative Session on Algorithmic Trading
In the opening of the informative session on algorithmic trading, the speaker expresses gratitude for the growing interest in this domain and acknowledges the significant impact it has had over the years. They introduce Nitesh Khandelwal, the co-founder of iRageCapital and QuantInsti, as the speaker for the session. Nitesh is described as having rich experience in financial markets and will provide an overview of algorithmic trading, trends, and opportunities, particularly for beginners. The speaker highlights recent news articles that demonstrate the increasing popularity of algorithmic trading and its projected global growth rate of over 10% CAGR in the next five years.
The speaker dives into the growth and opportunities in algorithmic trading, emphasizing its rapid expansion with double-digit percentage numbers worldwide. They present data from different exchanges, showcasing the increasing volumes of algorithmic trading in equity and commodity markets. To define algorithmic trading, they explain it as the process of using computers programmed with a defined set of instructions to place trading orders at high speed and frequency, aiming to generate profits. The critical role of technology in algorithmic trading is emphasized, especially in high-frequency trading, where it accounts for a significant portion (up to 60-70%) of a trading strategy's profitability.
Moving on to the key aspects of algorithmic trading, the speaker discusses technology, infrastructure, and strategy. They highlight the prominent role of technology in today's algorithmic trading world, with technocrats and technology-oriented traders leading the way. Infrastructure is identified as a crucial factor that defines a trader's success probability, emphasizing the importance of the type of infrastructure used. Lastly, the speaker explains that the trading strategy itself is what ultimately determines profitability and success, accounting for 30-70% of a trader's overall success probability. They outline the different phases of strategy development, including ideation, modeling, optimization, and execution.
The stages of algorithmic trading, such as optimization, testing, and execution, are described by the speaker. They stress the importance of optimizing the input variables of a trading model to ensure consistent output before moving on to execution. Additionally, when automating execution, the speaker cautions about potential risks and highlights the need for a robust risk management system to ensure safety and prevent operational failures. They note that, statistically, well-managed execution of each leg of a strategy leads to major gains and higher returns per trade.
The risks involved in algorithmic trading are discussed, including the potential for significant losses, and the importance of operational risk management is emphasized. The speaker also covers the infrastructure required for algorithmic trading, such as high-speed lines and colocation, which allow for faster execution. The practical steps of setting up an algorithmic trading desk are explained, starting with market access through obtaining a membership or opening an account with a broker. The speaker mentions that licensing requirements may vary depending on the regulator. Choosing the right algorithmic trading platform is crucial and depends on the specific strategy to be executed.
Algorithmic trading platforms and their selection based on the type of strategy are discussed by the speaker. For low-frequency trading strategies, brokers often provide free, web-based platforms that allow for automated trading using API code in various programming languages. For higher sensitivity to latency, deployable platforms can be used at a cost of a few hundred dollars per month. The speaker also emphasizes that the type of infrastructure used depends on the strategy, with high-frequency data and analysis requiring top-class performance servers.
The speaker elaborates on different types of access and infrastructure required for algorithmic trading, considering various regulations and technologies. They explain the concept of co-location and proximity hosting, highlighting factors like latency, order routing lines, and market data. The importance of having a robust database and analytics for strategy optimization is emphasized, especially when dealing with large amounts of tick-by-tick data. The cost of access to these tools and the level of data usage required for different trading strategies are explored.
The speaker explains that algorithmic trading demands more sophisticated tools than Excel, such as R or MATLAB, for data processing and model building. They also mention the increased compliance and audit requirements that come with automation, which is a global trend. Traders are advised to ensure that their transactions are auditable and that their code and strategies have proper safeguards against edge cases and runaway behavior. It is also recommended to have a team with a basic understanding of analytics, technology, and financial markets, with at least one team member specializing in all three areas. This is contrasted with the conventional trading success recipe, which required skills like number crunching, pattern recognition, typing speed, financial market understanding, and discipline.
The speaker discusses the success recipe for quantitative trading using algorithmic trading. They emphasize the need for a strong mathematical and statistical understanding, as well as proficiency in financial computing. Understanding technology and market structure is crucial, along with an overall comprehension of how hardware functions and networks play a role in trading success. Financial market understanding is also essential, and knowing how to code and model a strategy is an added advantage. For those setting up higher frequency shops, all of these elements are vital. The speaker highlights the importance of EPAT for individuals entering the trading world, especially since many individuals in finance lack the necessary technology understanding for success.
The speaker talks about addressing the lack of technology understanding among those who need quantitative analysis tools for trading. They mention the creation of EPAT (Executive Programme in Algorithmic Trading) for working professionals who want to gain expertise in algorithmic trading. EPAT is a six-month integrated online program that includes weekend classes for four to four and a half months, followed by an additional one and a half to two months of project work. The project work allows participants to specialize in their chosen domain. The program consists of nine different modules taught by industry practitioners to ensure the material covered aligns with industry needs and trends.
The various modules of the EPAT program are discussed, starting with an introduction to the financial markets, basic statistics, derivatives and risk, advanced statistics, and quantitative trading strategies. The quantitative trading strategy module covers various trading strategies and also includes topics related to setting up an algorithmic trading desk and the business aspects involved. The program also covers the implementation of algorithmic trading platforms using Python, providing instruction on the basics of Python and how to implement trading strategies on different platforms. Participants are assigned a mentor to oversee their project work, which acts as a specialization within their chosen domain.
The speaker discusses the support services provided by the career services team to participants and alumni of the algorithmic trading program. They highlight the significance of learning by doing, live lectures, and access to recorded lectures. The speaker presents a graph showing industry requirements and the profiles companies are seeking in applicants, ensuring that the program covers relevant topics. They mention that the program has industry leaders as instructors from different countries and that their alumni are based in over 30 countries worldwide. The various events and programs organized by the institute to increase awareness of algorithmic trading are also highlighted.
The speaker proceeds to answer various questions from the viewers related to algorithmic trading. They confirm that U.S. citizens can open trading accounts in India but need to go through a custodian and follow a specific process to open an account with a clearing broker. The speaker recommends books by Dr. Ernest P. Chan and Larry Harris for those interested in setting up an algorithmic trading desk or starting with algo trading. They also mention several platforms available in India for algorithmic trading, such as Symphony Fintech, Automated Trading, and uTrade, among others. Real-time market data can be obtained either directly from the exchange or through one's broker. Additionally, they confirm that students can take the same strategy they developed in the course and apply it to live trading.
The speaker continues to answer various questions from viewers regarding algorithmic trading. They explain that coding and backtesting a strategy using different tools is possible and not difficult to port to live trading. Questions regarding regulations, compliance, and licensing for trading in the Indian market are also addressed. The speaker explains that permission is required from the exchange for eligible automated trading strategies and that a demo is necessary. They also discuss popular trading strategies, such as momentum-based, statistical arbitrage, and machine learning-based strategies.
The speaker discusses the types of trading strategies covered in the course and emphasizes the importance of learning how to develop new strategies, test them, and execute them. They answer questions about job prospects for course graduates, average salaries offered, and the programming skills required to analyze candlestick patterns. Concerns about knowledge level and time commitment for working professionals taking the course, as well as the costs associated with setting up an algorithmic trading desk in India, are also addressed. The speaker emphasizes the importance of having a basic understanding of key concepts before starting the program to maximize its value.
The speaker answers various questions related to algorithmic trading, suggesting that individuals with limited knowledge of stock markets can contact a sales specialist for guidance to gain a basic understanding of these domains before proceeding with the course. They explain that algorithmic trading is useful to individual traders who want to ensure discipline in their trades and scale up their strategies to include multiple instruments. The speaker also addresses concerns regarding transitioning from one course to another and brokers in India who offer algo trading services. Finally, they explain that server colocation at an exchange does not provide undue advantage to algorithmic traders but benefits retail traders by providing tighter bid-ask spreads.
The speaker discusses the benefits of algorithmic trading for retail traders and how technology can help minimize losses. They address questions about non-programmers learning Python for algorithmic trading and whether Indian residents can trade in global markets. They clarify that their firm primarily focuses on education rather than providing brokerage or algorithmic trading platforms. The speaker emphasizes that their program has helped hundreds of participants from over 30 countries and encourages interested individuals to contact their business development and sales teams for more information.
The speaker addresses several questions from viewers, including whether all strategies need to be approved by the exchange and how to protect a strategy. They explain that algo providers cannot see a trader's strategy, and exchanges are primarily concerned with ensuring strategies don't cause market havoc. They mention a student discount for the program and discuss the availability of algo trading in commodities markets in India. Furthermore, they highlight the importance of linear algebra and probability distribution in HFT profiles, depending on the role, and emphasize that algo trading can be applied worldwide to any trading instrument, including options and forex.
The speakers discuss coding strategies, providing reusable code, and the necessity of learning Python and R. They also answer questions regarding the validation of strategies, potential ROI, and the necessary infrastructure for a moderate number of traders. The speakers caution against sharing strategies with others and suggest focusing on learning best practices and developing unique trading strategy ideas.
The speakers answer various questions on algorithmic trading, including the ideal time frame for backtesting a strategy, the minimum internet bandwidth required for moderate-volume trading, and whether brokers can be bypassed when obtaining market data. They also discuss the best vendors for algorithmic trading in India and whether discretionary trading strategies like Elliott wave theory can be programmed. The speakers suggest that any strategy can be coded if one is comfortable with programming and has clear rules in mind. They advise traders to choose vendors based on their individual requirements and the pros and cons of each vendor.
In conclusion, the speaker thanks the attendees and offers further assistance. Although they were unable to answer all the questions due to time constraints, the speaker encourages the audience to send in their inquiries and provides contact information for the Quant Institute team. They express their appreciation for the interest in algorithmic trading and emphasize the importance of continuous learning and practice in this field.
Impact of Brexit and Recent Market Events on Algorithmic Trading - July 19, 2016
Nitesh Khandelwal brings a wealth of experience in the financial markets, having worked across various asset classes in different roles. He is the co-founder of iRageCapital Advisory Private Limited, a reputable company that specializes in providing Algorithmic Trading technology and strategy services in India. Nitesh played a pivotal role in driving the business aspects of iRageCapital and QuantInsti. At QuantInsti, he also served as the head of the derivatives and inter-market studies training department. Currently, he holds the position of Director at iRage Global Advisory Services Pte Ltd in Singapore. Nitesh has a background in bank treasury, with expertise in the FX and interest rate domains, as well as experience in proprietary trading desks. He holds a Bachelor's degree in Electrical Engineering from IIT Kanpur and a Post-Graduation in Management from IIM Lucknow.
Recent global events such as Brexit and the resulting volatility in the currency market have caused significant concern among investors. It is natural for risk aversion to increase after such events, as market participants exercise caution in their trading activities. However, even during such turbulent times, automated traders are thriving. Media reports indicate that hedge funds employing algorithmic trading consistently outperform manual traders, particularly in stressful market conditions.
Informative Session Contents:
Analysis of the Biggest Trading Events of the Season
Requirements for Becoming a Quant/Algo Trader
Quantitative Trading using Sentiment Analysis | By Rajib Ranjan Borah
Sentiment Analysis, also known as opinion mining, is the process of computationally identifying and categorizing opinions expressed in a piece of text, especially in order to determine whether the writer’s attitude towards a particular topic, product, etc. is positive, negative, or neutral.
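A minimal illustration of the idea, using the TextBlob library's polarity score on a made-up headline (any sentiment library would do; the headline and thresholds are assumptions):

```python
from textblob import TextBlob   # pip install textblob; one of several sentiment libraries

headline = "Company X beats earnings expectations, shares surge"   # hypothetical example
polarity = TextBlob(headline).sentiment.polarity   # ranges from -1.0 (negative) to +1.0 (positive)
label = "positive" if polarity > 0 else "negative" if polarity < 0 else "neutral"
print(f"{polarity:+.2f} -> {label}")
```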
Informative Session about Algorithmic Trading by Nitesh Khandelwal - May 24, 2016
Session Contents:
Leveraging Artificial Intelligence to Build Algorithmic Trading Strategies
The CEO and co-founder of a trading strategy development company explains the exciting potential of AI and machine learning in algo trading. These tools have been proven successful by large quantitative hedge funds, and their accessibility has increased significantly thanks to open-source libraries and user-friendly tools that don't require strong math or computer science backgrounds. The speaker also introduces key terms related to AI and machine learning in the context of algorithmic trading. Artificial intelligence is defined as the study of intelligent agents that perceive their environment and take action to maximize success. Machine learning, a subset of AI, focuses on algorithms that can learn and make predictions without explicit programming. Pattern recognition, a branch of machine learning, involves uncovering patterns in data, while association rule learning involves forming if-then statements based on those patterns. The speaker briefly mentions the concept of Big Data, which is characterized by its four V's: volume, velocity, variety, and veracity.
The presenter outlines the terms and concepts to be discussed, including big data, veracity, artificial intelligence, machine learning, pattern recognition, and data mining. They then delve into best practices and common pitfalls when building algorithmic trading strategies. These include defining tangible objectives for success, prioritizing simplicity over complexity, focusing on creating a robust process and workflow instead of relying on a single model, and maintaining a healthy skepticism throughout the entire process to avoid biased results.
The speaker proceeds to discuss how machine learning can address the challenge of selecting indicators and data sets for building trading strategies. Decision trees and random forests are introduced as techniques to identify important indicators by searching for the best data splits. Random forests are noted to be more robust and powerful than decision trees, albeit more complex. The speaker also explores how combining indicator sets using a technique called "wrapper" can create a more powerful combination.
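A minimal scikit-learn sketch of ranking indicators by a random forest's feature importances; the indicator names, stand-in data, and labels are invented for illustration.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
names = ["RSI", "SMA_distance", "ATR", "MACD"]       # hypothetical indicator names
X = rng.normal(size=(500, 4))                        # stand-in indicator values
y = (X[:, 0] + 0.1 * rng.normal(size=500) > 0).astype(int)  # stand-in up/down labels

# Each tree searches for the data splits that best separate the classes;
# averaging split quality across the forest ranks the indicators
forest = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)
for name, importance in zip(names, forest.feature_importances_):
    print(f"{name:>12}: {importance:.2f}")
```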
Next, the speaker discusses the use of technical indicators in algorithmic trading strategies and their benefits in identifying underlying patterns and trends. The question of optimizing indicator parameters based on machine learning is raised, and the concept of ensemble learning is introduced, which combines multiple classifiers to analyze data and uncover different patterns and information. The distinction between feature selection and feature extraction in machine learning is also mentioned, with a reminder to be mindful of curve fitting when utilizing multiple classifiers.
The presenters demonstrate the combination of pattern recognition and association rule learning as a way to leverage machine learning algorithms while still maintaining interpretability for trading strategies. They provide an example using a support vector machine to analyze the relationship between a three-period RSI and the difference between the open price and a 50-period SMA on the Aussie dollar/US dollar pair. Clear patterns are translated into trading rules. However, they acknowledge the limitations of this method, such as analyzing high-dimensional data, automation challenges, and interpreting the output. The speaker introduces TRAIDE as a possible solution to address these concerns and allow traders to leverage the algorithms with any indicators they desire.
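That setup can be sketched as follows; the stand-in RSI and open-minus-SMA values (and the labels) are random placeholders, not the AUD/USD data from the demo.

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
rsi3 = rng.uniform(0, 100, size=400)              # stand-in 3-period RSI values
open_minus_sma50 = rng.normal(0, 0.01, size=400)  # open price minus 50-period SMA
X = np.column_stack([rsi3, open_minus_sma50])
y = rng.integers(0, 2, size=400)                  # stand-in up/down labels

clf = SVC(kernel="rbf").fit(X, y)

# Regions of the two-feature space where the classifier predicts "up" can be
# read off and rewritten as explicit, interpretable rules, e.g.
# "go long when RSI3 < 30 and the open is below the 50-period SMA"
print(clf.predict([[25.0, -0.005]]))
```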
The presenter proceeds to demonstrate how to build trading strategies using the cloud-based TRAIDE platform. They use the example of building a strategy for trading AUD/USD on a daily chart using five years of data. To avoid curve fitting, the algorithm is trained only on data up to January 1, 2015, leaving a year of out-of-sample data for testing. The importance of not squandering this out-of-sample data, which would bias the backtest, is emphasized. Using machine learning algorithms for indicator analysis and pattern identification is presented as a flexible and powerful approach to optimizing trading strategies.
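The date-based split described here is a one-liner with pandas; the prices below are synthetic, with only the 2015-01-01 cutoff taken from the demo.

```python
import numpy as np
import pandas as pd

idx = pd.date_range("2010-01-01", "2015-12-31", freq="B")   # synthetic daily bars
prices = pd.DataFrame(
    {"close": 1.0 + np.cumsum(np.random.default_rng(0).normal(0, 0.002, len(idx)))},
    index=idx,
)

train = prices.loc[:"2014-12-31"]    # in-sample: all the algorithm may see while training
test = prices.loc["2015-01-01":]     # out-of-sample: evaluated once, at the very end
```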
The presenter continues by demonstrating the process of building a trading strategy using the TRAIDE platform and the open-source indicator library TA-Lib. They analyze the price movement of AUD/USD over a five-year span, identify indicator ranges with strong signals, and refine rules for going long by selecting those ranges and noting their relationships. By adding a rule for price relative to a 50-period SMA, they identify two different ranges with strong signals. The advantage of TRAIDE is highlighted: it allows traders to analyze the machine learning algorithm's results and build rules directly from histograms for clearer interpretation.
The presenter discusses the procedure for building short rules for a trading strategy, including selecting the right indicators and refining rules to find strong short signals. Testing and exploring different patterns with the indicators are emphasized to find the optimal strategy. Generating code and testing the strategy out-of-sample in MetaTrader4, with the inclusion of transaction costs, is also demonstrated. The presenter confirms that the approach is related to algorithmic trading.
The speaker explains how to test the strategy built on the most recent out-of-sample data, which was not used during the strategy building process. The simulation is conducted using MetaTrader, a popular trading platform for currencies and equities. The platform's active community of developers creates automated strategies, custom indicators, and provides an excellent opportunity for testing and trading on the same data. The focus of the simulation is to assess the strategy's performance on out-of-sample data. The speaker mentions that the tool is developed by a startup planning to make it available for free by white-labeling it directly to brokerages.
The speaker addresses the incorporation of risk and money management techniques into a strategy after backtesting. Simple take profit and stop-loss measures are discussed as ways to decrease drawdowns and protect against downside risks. To guard against curve fitting, the speaker emphasizes the use of wide bin selections, out-of-sample testing, and demo accounts before going live. The preference for simplicity and transparency over black box neural networks in trading strategies is also mentioned.
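A take-profit/stop-loss exit of the kind described might be sketched as follows; the 2%/1% thresholds are arbitrary examples, not recommendations.

```python
def should_exit(entry_price, current_price, side, take_profit=0.02, stop_loss=0.01):
    # Close a long at +2% (profit target) or -1% (stop); mirrored for shorts
    move = (current_price - entry_price) / entry_price
    if side == "long":
        return move >= take_profit or move <= -stop_loss
    return move <= -take_profit or move >= stop_loss

print(should_exit(1.0000, 1.0210, "long"))   # True: profit target hit
```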
During the presentation, the speaker addresses questions comparing their platform to others such as Quantopian or QuantConnect, highlighting that their platform focuses more on strategy discovery and analysis rather than automating existing strategies. The importance of technical data in automated strategies is acknowledged, while also noting that their platform includes other datasets, such as sentiment indicators. MetaTrader 4 is demonstrated as a useful tool, and the significance of risk and money management strategies in trading is discussed. The speaker also covers best practices and common pitfalls in automated trading strategies.
The speaker discusses the use of indicators in trading strategies, emphasizing the trade-off between complexity and overfitting. They recommend using three to five indicators per strategy to strike a balance between containing sufficient information and avoiding overfitting. The importance of the data or feature fed into the algorithm and how the output is implemented is highlighted. The underlying algorithm is considered less crucial than the indicators used and their implementation. Questions about using the genetic optimizer in MetaTrader 4 and the importance of aligning indicators with the platform are also addressed.
The speaker explores the application of machine learning in value investing. The same process discussed earlier for algorithmic trading can be applied to value investing, but instead of technical indicators, datasets that quantify the inherent value of a company are used. Market cap or price-earnings ratio, for example, can reveal the relationship between these data and the price movement of the asset. Optimizing for return per trade and identifying when an algorithm is out of sync with the market are also discussed. Python and R are recommended as suitable programming languages, depending on one's coding experience and background.
Lastly, the speaker highlights the essential skills and knowledge required for algorithmic trading, which involve merging finance and technology. Understanding the markets, big data statistics, and technology for automating strategies are crucial. Quantitative education programs are suggested as a means to acquire the necessary training in various operations and skills for becoming a successful algorithmic trader. Python is recommended as a great option for building algorithms.