Quantitative trading - page 6

 

Ernest Chan (Predictnow.ai) - "How to Use Machine Learning for Optimization"




Ernest Chan, the co-founder of Predictnow.ai, delves into the challenges that traditional portfolio optimization methods face when markets undergo regime changes, and suggests that machine learning can provide a solution. Chan explains how his team applies machine learning to portfolio optimization, with a focus on time series features that measure financial quantities such as volatility, prices, and interest rates. By combining the Fama-French Three Factor model with the understanding that ranking is more crucial than prediction, they aim to construct portfolios that remain optimal as market conditions change.

Chan goes on to share concrete results of the CPO (Conditional Portfolio Optimization) model's performance and provides examples of clients who have seen improvements in their portfolios using this approach. He emphasizes that machine learning models can adapt to regime changes, enabling them to respond effectively to evolving market conditions. Additionally, he discusses how returns for the S&P 500 Index and its components can be computed using a machine learning algorithm that relies on time series features.

Furthermore, Chan discusses the possibility of an ensemble approach to optimization, which his team is still researching, and mentions their "secret sauce" that eliminates the need for extensive computational power. Rather than following a two-step process of predicting regimes and then conditioning on each regime's distribution of returns, they use the input features to predict the portfolio's performance directly. Moreover, Chan clarifies that the expected return will resemble past results when a large portion of that training sample is included in the model.

Dr. Ernest Chan explains the challenges faced by traditional portfolio optimization methods in the presence of regime changes and emphasizes the role of machine learning in addressing this issue. He discusses the application of machine learning techniques, the importance of time series features, and the significance of ranking in constructing optimal portfolios. He shares specific results and client success stories, highlighting the adaptability of machine learning models to changing market conditions. Chan also provides insights into the computation of returns using machine learning algorithms and sheds light on their methodology, including a possible ensemble approach.
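
To make the mechanics concrete, here is a minimal sketch of the conditional optimization idea described above: a supervised model is trained on rows that pair regime features with candidate weights and a realized performance label (e.g., the Sharpe ratio), and at decision time many candidate allocations are scored and ranked under today's features. All data, feature names, and the choice of a gradient-boosted regressor are placeholders, not Predictnow.ai's actual implementation.

```python
# Schematic sketch of Conditional Portfolio Optimization (CPO) on synthetic data.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)
n_hist, n_assets, n_features = 500, 3, 5

# Historical training rows: [regime features | candidate weights] -> realized Sharpe ratio
regime_feats = rng.normal(size=(n_hist, n_features))        # e.g. volatility, rates, prices...
cand_weights = rng.dirichlet(np.ones(n_assets), size=n_hist)
X = np.hstack([regime_feats, cand_weights])
y = rng.normal(size=n_hist)                                  # placeholder realized-Sharpe labels

model = GradientBoostingRegressor().fit(X, y)

# At rebalance time: score many candidate allocations under TODAY's regime features
today = rng.normal(size=n_features)
candidates = rng.dirichlet(np.ones(n_assets), size=2000)
scores = model.predict(np.hstack([np.tile(today, (len(candidates), 1)), candidates]))

best = candidates[np.argmax(scores)]     # ranking matters more than the score's magnitude
print("chosen weights:", np.round(best, 3))
```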

  • 00:00:00 In this section, Ernest Chan discusses traditional portfolio optimization methods and the challenge of regime changes in the market, which means that a portfolio that was optimal in the past may not be optimal in the future. He explains that most methods use past historical returns or information as input and do not take into account regime changes. He suggests that machine learning can help to deal with this problem by making use of big data and all the variables observed across different markets. Machine learning can generate expected returns that are not solely based on historical returns and can therefore be better suited for dealing with regime changes.

  • 00:05:00 In this section, Ernest Chan discusses the concept of regime in finance and how it impacts optimization. He explains that while some regimes, such as bear or bull markets, can be defined explicitly, there are also hidden regimes that cannot be defined explicitly and are constantly changing. These regimes are hard to predict and undermine classical optimization methods. This understanding led Dr. Chan to develop the technique of conditional portfolio optimization, which measures and adapts to the current regime using a large number of variables and can thereby improve trading strategy performance under varying market conditions.

  • 00:10:00 In this section, Ernest Chan discusses the use of machine learning for optimization and how to adapt parameters using supervised learning. He explains that in supervised learning there is a labeled target variable, such as the Sharpe ratio or the future one-month return of a trading strategy. The input is a combination of market and macroeconomic variables that measure the current regime and the control variables that the trader can adjust, forming a large dataset. To optimize, an exhaustive search over combinations of the parameters is conducted to find the maximum Sharpe ratio; this is conditional parameter optimization. Ernest Chan concludes with a simple example using a toy strategy that illustrates how market features and control features combine to form one input row.

  • 00:15:00 In this section, Ernest Chan explains how his team applies machine learning to portfolio optimization. The team uses the same machine learning approach they applied to parameter optimization to tackle this more extensive problem. They use big data as input features and implicit prediction of hidden regimes. Unlike classical portfolio optimization methods that solely rely on past returns and covariance of returns as input data, their method considers the current market condition, technical and fundamental indicators, and macroeconomic indicators to adapt to the regime and find the portfolio that is optimal under the current market condition. The team answers questions from the Q&A, explaining that they do not use simulation but real market data to compute returns given a particular hypothetical capital allocation.

  • 00:20:00 In this section, Ernest Chan explains the type of features used in portfolio optimization through machine learning. He emphasizes that only time series features are used and no cross-sectional features are involved. This method focuses on the characteristics of the portfolio or market regime as a whole, captured by factors measuring volatility, prices, and interest rates. While this may seem strange, Chan relates it to the explanatory power of the Fama-French Three Factor Model. Using machine learning models, the objective is not to predict returns but to rank them accurately in order to construct the optimal portfolio.

  • 00:25:00 In this section, Ernest Chan discusses the importance of ranking in financial applications and how it can be applied to portfolio optimization. He explains that traditional methods of using machine learning models to predict cross-sectional returns can result in a garbage in, garbage out situation if the return predictions are not accurate in magnitude and sign. However, with the CPO method, which combines the Fama-French Factor model and the notion that ranking is more important than prediction, the optimal solution is much more stable against errors in any step of the program. He also notes that this method can tolerate tremendous errors in machine learning prediction because of ranking.

  • 00:30:00 In this section, Ernest Chan discusses how the effect of behavioral finance can be measured using familiar market metrics such as the net delta of options buying activity. He explains that his company uses this metric as one of the features in their CPO model, which captures the effect of the phenomenon rather than its underlying cause. Chan then shares concrete results of the CPO model's performance, including beating the mean-variance method and outperforming traditional benchmarks. Additionally, he provides an example of how the CPO method allocated weights to growth stocks and large-cap stocks more effectively than other methods during certain time periods.

  • 00:35:00 In this section, the speaker explains how classical methods of investing and trading are fixed and not adaptive, unlike the CPO (Conditional Portfolio Optimization) method, which outperforms classical methods because it can adapt to the market regime. CPO optimizes and recommends transactions relative to the existing portfolio, so it does not incur unnecessary additional transaction costs and only recommends buying more or less of particular stocks. The speaker then cites examples of clients who have adopted the CPO method and have seen an improvement in their portfolio's performance.

  • 00:40:00 In this section, Ernest Chan talks about a case study in which they were able to achieve positive returns in a stock portfolio despite constraints of zero to 25 percent on each stock. The portfolio consisted of tech stocks and was expected to crash badly in 2022, but their method of allocating 50 percent to cash during that period helped generate returns. When asked about the reproducibility of their methods, Chan explains that while some features, like the net delta of options, are important inputs, only a high-level description of them has been disclosed on their website. He also mentions using gradient-boosted decision trees and other machine learning algorithms, and that their approach to defining market regime is a representation built from hundreds of features.

  • 00:45:00 In this section, Chan explains how machine learning can be used for optimization by constructing a response variable, such as a Sharpe ratio, and fitting a function F for different scenarios of the control variable for each market state. The label of the supervised learning algorithm is the variable to be maximized, such as the Sharpe ratio, and each portfolio proposal is fed into the predictive formula until the best performing portfolio is found. Chan notes that the complexity of the problem does not scale linearly with the number of assets in the portfolio, but his team has developed an algorithm to manage the problem. The largest portfolio universe they have used is the S&P 500.

  • 00:50:00 In this section, Ernest Chan provides an example of how the machine learning model responds to regime changes by using the cash allocation as an indicator. When the bear market started in 2022, the model stayed mostly in cash, saving the portfolio from negative returns. He also mentions that, because both the learning algorithm and the optimization are non-linear, the objective function can be set to anything, not just the traditional maximum Sharpe ratio or return, and that constraints such as weight limits, ESG requirements, and turnover can be applied. The software is flexible and can accommodate any market features that the client provides. Additionally, Chan mentions that the model can deal with stocks with short histories, as it allows for the addition and deletion of portfolio components, and the model can be retrained with each rebalance.

  • 00:55:00 In this section, Chan discusses the computation of returns for the S&P 500 Index and its components. He explains that using a machine learning algorithm to compute the portfolio's returns is different from using the Markowitz technique because the machine learning algorithm uses time series features rather than stock returns as input. Chan also notes that the regime change is defined by 180 variables, and daily, monthly, and quarterly measurements are used as features that are fed into a machine learning algorithm, which selects the top-ranked features useful in predicting the future of the portfolio. Finally, Chan reframes the problem as a ranking problem rather than a regression problem, and it can be reframed as a classification problem as well.

  • 01:00:00 In this section, the speaker discusses the possibility of using an ensemble of portfolios instead of just one optimal portfolio, though the research team needs to look into it further. They also confirm that if there are one million candidate portfolios, the model would, in principle, need to evaluate one million combinations on each day of historical data; however, their "secret sauce" eliminates the need for that much computational power. The speaker also explains that they do not use a two-step process of predicting regimes and then conditioning on that regime's distribution of returns; instead, they use the features to predict the performance of a portfolio directly. They end by saying that the expected return will be similar to what happened in the past if a lot of that training sample is included in the model.
Ernest Chan (Predictnow.ai) - "How to Use Machine Learning for Optimization"
  • 2023.03.01
  • www.youtube.com
Abstract: Conditional Portfolio Optimization is a portfolio optimization technique that adapts to market regimes via machine learning. Traditional portfolio...
 

Financial Machine Learning - A Practitioner’s Perspective by Dr. Ernest Chan




In this informative video, Dr. Ernest Chan delves into the realm of financial machine learning, exploring several key aspects and shedding light on important considerations. He emphasizes the significance of avoiding overfitting and advocates for transparency in models. Furthermore, Dr. Chan highlights the benefits of utilizing non-linear models to predict market behavior. However, he also discusses the limitations of machine learning in the financial market, such as reflexivity and the ever-changing dynamics of the market.

One crucial point Dr. Chan emphasizes is the importance of domain expertise in financial data science. He underscores the need for feature selection to gain a better understanding of the essential variables that influence a model's conclusions. By identifying these important inputs, investors and traders can gain insights into their losses and understand why certain decisions were made.
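
One standard way to implement the feature-selection idea Dr. Chan describes (shown here on synthetic data, and not his proprietary method) is to rank inputs by permutation importance on held-out data:

```python
# Minimal feature-importance sketch using permutation importance on synthetic data.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
X = rng.normal(size=(1000, 6))                                  # six candidate input features
y = (X[:, 0] + 0.5 * X[:, 2] + rng.normal(scale=0.5, size=1000) > 0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)

# Shuffle each feature on the test set and measure how much the score degrades
imp = permutation_importance(clf, X_te, y_te, n_repeats=20, random_state=0)
for i in np.argsort(imp.importances_mean)[::-1]:
    print(f"feature_{i}: {imp.importances_mean[i]:.3f}")
```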

Dr. Chan also touches upon the application of machine learning in risk management and capital allocation. He suggests finding a niche market and avoiding direct competition with well-funded organizations. By doing so, practitioners can enhance their chances of success in these areas.

Throughout the video, Dr. Chan highlights the advantages and challenges associated with different models and strategies. He notes that while traditional quantitative strategies, such as linear models, are easy to understand and less prone to overfitting, they struggle with non-linear dependence between predictors. In contrast, machine learning models excel at handling non-linear relationships, but their complexity and opacity can pose challenges in interpreting their results and assessing statistical significance.

Dr. Chan also discusses the limitations of using machine learning to predict the financial market. He emphasizes that the market is continually evolving, making it challenging to predict accurately. However, he suggests that machine learning can be successful in predicting private information, such as trading strategies, where competing with identical parameters is less likely.

Additionally, Dr. Chan touches upon the incorporation of fundamental data, including categorical data, into machine learning models. He points out that machine learning models have an advantage over linear regression models in handling both real-value and categorical data. However, he cautions against relying solely on machine learning, stressing that deep domain expertise is still crucial for creating effective features and interpreting data accurately.
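
As a small, hypothetical illustration of that point: a tree-based model can consume a categorical column once it is encoded, alongside real-valued fundamentals, whereas plain linear regression needs the same encoding and still assumes purely linear, additive effects.

```python
# Mixing real-valued and categorical features in a tree model (synthetic example).
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(2)
df = pd.DataFrame({
    "pe_ratio": rng.normal(15, 5, 300),                         # real-valued fundamental
    "sector": rng.choice(["tech", "energy", "health"], 300),    # categorical fundamental
})
y = df["pe_ratio"] * 0.1 + (df["sector"] == "tech") * 2 + rng.normal(size=300)

X = pd.get_dummies(df, columns=["sector"])                       # one-hot encode the categorical column
model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)
print(dict(zip(X.columns, np.round(model.feature_importances_, 3))))
```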

In the realm of capital allocation, Dr. Chan highlights how machine learning can provide more sophisticated expected returns, challenging the use of past performance as a sole indicator of future success. He also discusses the nuances of market understanding that machine learning can offer, with probabilities varying daily, unlike static probability distributions from classical statistics.

Dr. Chan concludes by addressing the limitations of deep learning in creating diverse cross-sectional features that require domain expertise. He shares his thoughts on the applicability of reinforcement learning in financial models, noting its potential effectiveness at high frequencies but limitations in longer time scales.

For those interested in further exploring financial machine learning, Dr. Chan recommends his company PredictNow.ai as a valuable resource for no-code financial machine learning expertise.

  • 00:00:00 In this section of the video, Dr. Ernest Chan discusses his long history in machine learning and how he has found value in applying it to finance, which he admits has been difficult to do until recently. He advocates for simple models and strategies, such as single-factor and linear models, which have worked well for quantitative traders for decades. With the rise of quant trading, these models are becoming less profitable, and Chan explains how he has been able to extract value from machine learning in a way that most people are not doing.

  • 00:05:00 In this section, Dr. Ernest Chan discusses the historical issue of overfitting in financial machine learning: when models have to fit many parameters, the risk of overfitting is very high, especially when working with low-frequency financial data such as daily time series. Over the years, however, advances in machine learning, particularly deep learning, have made it possible to mitigate overfitting. Techniques such as random forests, cross-validation, and dropout have helped reduce overfitting, and other tools have made machine learning more transparent. The problem with black-box trading is the lack of transparency: not being able to explain why you took certain trades is unacceptable, even if you make money.

  • 00:10:00 In this section, Dr. Ernest Chan discusses the concept of feature selection in machine learning, which can help traders and investors better understand the important variables that lead to a particular machine learning model's conclusion. Feature selection can help investors gain insights into why they lost money or why a model made a wrong decision by identifying the important inputs that lead to that outcome. Dr. Chan also highlights that machine learning can be used more effectively for risk management and capital allocation than as a primary signal generator, as market predictions are subject to reflexivity, which creates difficulties with detecting patterns in the past.

  • 00:15:00 In this section, Dr. Ernest Chan discusses the limitations of using machine learning to predict the financial market, which is constantly evolving and cannot be compared to predicting illnesses like cancer. However, he explains that using machine learning to predict private information, such as trading strategies, can be a successful approach as other hedge funds are not competing with the same exact parameters. He also compares traditional quantitative strategies to machine learning based strategies, noting that machine learning can help with modeling alternative and large datasets through non-linear models.

  • 00:20:00 In this section, Dr. Ernest Chan discusses the benefits of using non-linear models for predicting market behavior. Traditional quant models are easy to understand and are linear, making it hard to overfit, but they cannot handle non-linear dependence between predictors. Machine learning models, on the other hand, can handle non-linear dependence with ease, and their complexity and opacity make them difficult to replicate. Furthermore, machine learning models provide a probability of success, allowing for more informed capital allocation. However, overfitting is a problem with machine learning models, and assessing statistical significance can be difficult. Simulating backtests is an imperfect solution, and the nuances of the market cannot be fully captured in a simulation.

  • 00:25:00 In this section, Dr. Ernest Chan discusses the difference between traditional quantitative strategies and machine learning-based strategies. He explains that it is much harder to simulate the market and create accurate error bars for traditional strategies, making it difficult to assess their effectiveness. Machine learning models, on the other hand, make it easy to generate multiple backtests simply by switching the random seed, with each backtest potentially giving different results. This randomness allows for easier assessment of the statistical significance of backtests, a major advantage of using machine learning in trading. However, financial data science is the most difficult and time-consuming step of constructing a strategy, as there are usually numerous problems with financial data, even from reputable vendors.

  • 00:30:00 In this section, Dr. Ernest Chan outlines some of the problems associated with using sentiment data. He notes that sentiment data cannot always be trusted, as companies processing the news can go back and change parameters to make their track record look good. Because it is hard to know whether data has been produced with such hindsight bias, it may be necessary to process raw news into sentiment yourself, which introduces its own risks and is difficult to automate. The financial data science step is challenging because it requires human intelligence in the form of domain expertise, a paradoxical issue for financial machine learning. The second step, machine learning itself, the tech industry has already solved. The final step is constructing and backtesting trading strategies, which requires turning the predictions into a coherent strategy and assessing its statistical significance.

  • 00:35:00 In this section, Dr. Ernest Chan discusses how to convert predictions into a portfolio by following a standard routine, which can be found in finance textbooks, although this requires some domain expertise and is not completely automatic. He also highlights the difficulties of financial data science, such as making features stationary, and the importance of using meta-labeling to predict whether a strategy will be profitable instead of predicting the market (a minimal sketch appears after this list). Dr. Chan recommends reading his blog post on meta-labeling applied to finance for further information. He also mentions that Random Forest is the most popular model choice for financial machine learning because it captures non-linearity well and has just the right complexity.

  • 00:40:00 In this section, Dr. Ernest Chan talks about the importance of machine learning in predicting market trends and avoiding losses. He shares his personal experience of using a machine learning model to detect the presence of terrorist activity in the world economy and warns against the risks of not following its advice, as was the case with the Pfizer vaccine announcement. He also emphasizes the significance of feature selection in explaining losses to investors and recommends his own book on machine learning for beginners. Additionally, Dr. Chan highlights the importance of data cleanliness and stationarity in making correct predictions, for which he shares an example of how a non-stationary time series can negatively impact the model's ability to predict accurately.

  • 00:45:00 In this section, Dr. Ernest Chan discusses incorporating fundamental data, particularly categorical data, into machine learning models. While linear regression models cannot handle categorical data, machine learning models can handle both real value and categorical data. However, Dr. Chan emphasizes that machine learning cannot replace human traders entirely, as financial machine learning requires deep domain expertise to create features and correctly interpret data. Additionally, he warns against blind worship of deep learning and stresses that it is not a one-size-fits-all solution without sufficient relevant data. Finally, he advises young practitioners to find a niche market and avoid directly competing with well-funded organizations.

  • 00:50:00 In this section, Dr. Ernest Chan discusses capital allocation and how machine learning can provide a more sophisticated expected return as an input to the capital allocation model. He questions the contradiction of using past performance as the expected return, since it doesn't guarantee future success. Machine learning can also provide a nuanced understanding of the market, with probabilities varying every day, unlike classical statistics, which only provide a static probability distribution. When it comes to deep learning methods such as recurrent or convolutional neural networks, Dr. Chan thinks they may not be useful for non-time-series inputs and feature selection.

  • 00:55:00 In this section, Dr. Ernest Chan discusses the limitations of deep learning in creating diverse cross-sectional features necessary for making successful predictions requiring domain expertise. He also provides his opinion on the place of reinforcement learning in financial models on a variety of time scales. He believes reinforcement learning might work at a very high frequency for high-frequency trading as it can react to people placing orders on the order book, but it fails in longer time scales. Finally, he recommends his company PredictNow.ai as a great resource for no-code financial machine learning for those interested in the expertise of someone like himself.
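
The meta-labeling idea mentioned in the 00:35:00 segment can be sketched roughly as follows: a simple primary rule generates trade signals, and a secondary classifier learns to predict whether each signal will be profitable, so its probability can filter or size trades. The price path, moving-average rule, features, and 0.55 threshold are all illustrative placeholders rather than the recipe from the talk.

```python
# Meta-labeling sketch: a secondary model predicts whether the primary signal pays off.
# (Everything is synthetic and fit in-sample for brevity; a real study needs a proper split.)
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(3)
n = 2000
prices = 100 * np.exp(np.cumsum(rng.normal(0, 0.01, n)))            # synthetic price path
fast = np.convolve(prices, np.ones(10) / 10, mode="same")           # fast moving average
slow = np.convolve(prices, np.ones(50) / 50, mode="same")           # slow moving average

primary_long = fast > slow                                           # primary rule: MA crossover
fwd_ret = np.r_[prices[5:] / prices[:-5] - 1, np.zeros(5)]           # 5-bar forward return

idx = np.where(primary_long[:-5])[0]                                 # bars where the rule says "buy"
meta_y = (fwd_ret[idx] > 0).astype(int)                              # did that signal end up profitable?
feats = np.column_stack([
    fast[idx] / slow[idx] - 1,                                       # trend strength at signal time
    np.r_[0, np.diff(prices)][idx],                                  # last price change
])

clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(feats, meta_y)
take_trade = clf.predict_proba(feats)[:, 1] > 0.55                   # only act on confident signals
print("primary signals:", len(idx), "| accepted by meta-model:", int(take_trade.sum()))
```
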
Financial Machine Learning - A Practitioner’s Perspective by Dr. Ernest Chan
  • 2020.11.12
  • www.youtube.com
QUANTT and QMIND came together to offer a unique experience for those interested in Financial Machine Learning (ML). Unifying these two clubs is Dr. Ernest C...
 

Trading with Deep Reinforcement Learning | Dr Thomas Starke



Dr. Thomas Starke, an expert in the field of deep reinforcement learning for trading, delivered an insightful presentation and engaged in a Q&A session with the audience. The following is an extended summary of his talk:

Dr. Starke began by introducing deep reinforcement learning for trading, highlighting its ability to enable machines to solve tasks without direct supervision. He used the analogy of a machine learning to play a computer game, where it learns to make decisions based on what it sees on the screen and achieves success or failure based on its chain of decisions.

He then discussed the concept of a Markov decision process in trading, where states are associated with market parameters, and actions transition the process from one state to another. The objective is to maximize the expected reward given a specific policy and state. Market parameters are crucial in helping the machine make informed decisions about the actions to take.

The decision-making process in trading involves determining whether to buy, sell, or hold based on various indicators that inform the system's state. Dr. Starke emphasized the importance of not solely relying on immediate profit or loss labels for each state, as it can lead to incorrect predictions. Instead, the machine needs to understand when to stay in a trade even if it initially goes against it, waiting for the trade to revert back to the average line before exiting.

To address the difficulty of labeling every step in a trade's profit and loss, Dr. Starke introduced retroactive labeling. This approach uses the Bellman equation to assign a non-zero value to each action and state, even if it does not result in immediate profit. This allows for the possibility of reversion to the mean and eventual profit.
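
In standard notation (state s, action a, reward r, learning rate α, discount factor γ), the retroactive update Dr. Starke refers to is the textbook Q-learning form of the Bellman equation, rather than a formula quoted from his slides:

```latex
Q(s_t, a_t) \leftarrow Q(s_t, a_t)
  + \alpha \left[ r_{t+1} + \gamma \max_{a'} Q(s_{t+1}, a') - Q(s_t, a_t) \right]
```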

Deep reinforcement learning can assist in making trading decisions based on future outcomes. Traditional reinforcement learning methods build tables based on past experiences, but in trading, the number of states and influences is vast. To handle this complexity, deep reinforcement learning utilizes neural networks to approximate these tables, making it feasible without creating an enormous table. Dr. Starke discussed the importance of finding the right reward function and inputs to define the state, ultimately enabling better decision-making for trading.
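
A minimal sketch of that substitution, with an arbitrary network size and a single synthetic transition (not the architecture used in the talk), looks like this in PyTorch:

```python
# Minimal sketch of a neural network standing in for the Q-table (synthetic transition).
import torch
import torch.nn as nn

n_features, n_actions, gamma = 20, 3, 0.99            # actions: buy / hold / sell

q_net = nn.Sequential(nn.Linear(n_features, 64), nn.ReLU(), nn.Linear(64, n_actions))
opt = torch.optim.Adam(q_net.parameters(), lr=1e-3)

# One Bellman-target update on a dummy transition (state, action, reward, next_state)
state, next_state = torch.randn(1, n_features), torch.randn(1, n_features)
action, reward = torch.tensor([1]), torch.tensor([0.5])

with torch.no_grad():                                   # target uses the frozen network output
    target = reward + gamma * q_net(next_state).max(dim=1).values
pred = q_net(state).gather(1, action.view(1, 1)).squeeze(1)

loss = nn.functional.mse_loss(pred, target)             # temporal-difference error
opt.zero_grad(); loss.backward(); opt.step()
print("TD loss:", float(loss))
```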

The significance of inputs in trading was highlighted, emphasizing their need to have predictive value. Dr. Starke stressed the importance of testing the system for known behavior and selecting the appropriate type, size, and cost function of the neural network based on the chosen reward function. He explained how gamification is employed in trading, where historical and current prices, technical indicator data, and alternative data sources constitute the state, and the reward is the profit and loss (P&L) of the trade. The machine retroactively labels observations using the Bellman equation and continually updates tables approximated by neural networks to improve decision-making.

Regarding training with reinforcement learning, Dr. Starke discussed different ways to structure the price series, including randomly entering and exiting at various points. He also addressed the challenge of designing a reward function and provided examples such as pure percentage P&L, profit per tick, and the Sharpe ratio, as well as penalties to avoid long holding times or drawdowns.
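
The reward-function options listed above can be written down directly; the penalty weight and the exact definitions below are simplified placeholders:

```python
# Candidate reward functions for the RL trader, computed from a per-step P&L series.
import numpy as np

def pct_pnl(pnl):                              # pure percentage profit and loss
    return float(np.sum(pnl))

def profit_per_tick(pnl, n_ticks):             # reward normalized by trading activity
    return float(np.sum(pnl)) / max(n_ticks, 1)

def sharpe(pnl, eps=1e-9):                     # risk-adjusted reward
    return float(np.mean(pnl) / (np.std(pnl) + eps))

def pnl_with_drawdown_penalty(pnl, lam=0.5):   # punish deep drawdowns / long underwater spells
    equity = np.cumsum(pnl)
    drawdown = np.max(np.maximum.accumulate(equity) - equity)
    return float(np.sum(pnl)) - lam * float(drawdown)

pnl = np.random.default_rng(4).normal(0.0, 1.0, 250)   # dummy per-step P&L
print(pct_pnl(pnl), sharpe(pnl), pnl_with_drawdown_penalty(pnl))
```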

In terms of inputs for trading, Dr. Starke mentioned numerous options, including open-high-low-close and volume values, candlestick patterns, technical indicators like the relative strength index, time of day/week/year, and inputting prices and technical indicators for other instruments. Alternative data sources such as sentiment or satellite images can also be considered. The key is to construct these inputs into a complex state, similar to how input features are used in computer games to make decisions.

Dr. Starke explained the testing phase that the reinforcement learner must undergo before being used for trading. He outlined various tests, including clean sine waves, trend curves, randomized series with no structure, different types of order correlations, noise in clean test curves, and recurring patterns. These tests help determine if the machine consistently generates profits and identify any flaws in the coding. Dr. Starke also discussed the different types of neural networks used, such as standard, convolutional, and long short-term memory (LSTM). He expressed a preference for simpler neural networks that meet his needs without requiring excessive computational effort.
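
Such sanity-check curves are easy to generate, which is the point: if the learner cannot extract profit from a clean sine wave or a deterministic trend, or if it "profits" from a pure random walk, something is wrong. These generators are generic examples, not Dr. Starke's test suite.

```python
# Synthetic test curves for sanity-checking an RL trading agent before live use.
import numpy as np

rng = np.random.default_rng(5)
n = 1000
t = np.arange(n)

test_curves = {
    "sine": 100 + 10 * np.sin(2 * np.pi * t / 50),        # clean periodic signal: should be easy
    "trend": 100 + 0.05 * t,                               # deterministic trend: should be easy
    "random_walk": 100 + np.cumsum(rng.normal(0, 1, n)),   # no structure: should NOT be profitable
}
test_curves["noisy_sine"] = test_curves["sine"] + rng.normal(0, 2, n)   # signal buried in noise

for name, series in test_curves.items():
    print(name, "first values:", np.round(series[:3], 2))
```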

Dr. Starke then delved into the challenges of using reinforcement learning for trading. He acknowledged the difficulty of distinguishing between signal and noise, particularly in noisy financial time series. He also highlighted the struggle of reinforcement learning to adapt to changes in market behavior, making it challenging to learn new behaviors. Additionally, he mentioned that while reinforcement learning requires a significant amount of training data, market data is often sparse. Overfitting is another concern, as reinforcement learning tends to act on basic market patterns and can easily overfit. Building more complex neural networks can mitigate this issue, but it is a time-consuming task. Overall, Dr. Starke emphasized that reinforcement learning is not a guaranteed solution for profitable outcomes, and it is crucial to have market experience and domain-specific knowledge to achieve success in trading.

During the Q&A session, Dr. Starke addressed various questions related to trading with deep reinforcement learning. He clarified that the Bellman equation does not introduce look-ahead bias and discussed the potential use of technical indicators as inputs after careful analysis. He also explored the possibility of utilizing satellite images for predicting stock prices and explained that reinforcement trading can be performed on small time frames depending on the neural network calculation time. He cautioned that reinforcement trading algorithms are sensitive to market anomalies and explained why training random decision trees using reinforcement learning does not yield meaningful results.

Dr. Starke recommended using neural networks for trading instead of decision trees or support vector machines due to their suitability for the problem. He emphasized the importance of tuning the loss function based on the reward function used. While some attempts have been made to apply reinforcement learning to high-frequency trading, Dr. Starke highlighted the challenge of slow neural networks lacking responsiveness in real-time markets. He advised individuals interested in pursuing a trading career in the finance industry to acquire market knowledge, engage in actual trades, and learn from the experience. Lastly, he discussed the challenges of combining neural networks and options trading, recognizing the complexity of the task.

In conclusion, Dr. Thomas Starke provided valuable insights into trading with deep reinforcement learning. He covered topics such as the decision-making process in trading, retroactive labeling, the Bellman equation, the importance of inputs, testing phases, and challenges associated with reinforcement learning for trading. Through his talk and Q&A session, Dr. Starke offered guidance and practical considerations for leveraging deep reinforcement learning in the financial markets.

  • 00:00:00 Dr. Thomas Starke introduces deep reinforcement learning for trading, a topic that he has been interested in for several years. Reinforcement learning (RL) is a technique that allows a machine to solve a task without supervision; it learns by itself what to do to produce favorable outcomes. He explains how a machine that wants to learn how to play a computer game would start in a gaming scenario and move from one step to the next while responding to what it sees on the screen. Finally, the game ends, and the machine achieves success or failure based on the chain of decisions it made.

  • 00:05:00 Dr. Thomas Starke discusses trading with deep reinforcement learning and explains the concept of a Markov decision process. In this process, a state is associated with a particular market parameter, and an action transitions the process from one state to the next. Depending on the transition, the agent either receives a positive or negative reward. The objective is to maximize the expected reward given a certain policy and state. In trading, market parameters are used to identify what state the agent is in and help it make decisions on what action to take.

  • 00:10:00 Dr. Thomas Starke discusses the decision-making process involved in trading, which involves deciding whether to buy, sell, or hold based on various indicators that inform the state of the system. The goal is to receive the best possible reward, which is the profit or loss of the trade. However, the traditional machine learning approach of giving a state a particular label, such as immediate profit or loss, can lead to incorrect labels if the trade goes against us in the immediate future. Therefore, the machine must understand when to stay in the trade even if it initially goes against us and have the conviction to wait until the trade reverts back to the average line to exit the trade.

  • 00:15:00 Dr. Thomas Starke discusses retroactive labeling and how it is used in reinforcement learning to address the difficulty of labeling every step in a trade's profit and loss. He explains that traditional machine learning labels every step in the trade, making it difficult to predict whether the trade may become profitable in the future if it experiences a loss. Retroactive labeling uses the Bellman equation to assign a non-zero value to each action and state, even if it does not produce immediate profit, allowing for a reversion to the mean and eventual profit.

  • 00:20:00 Dr. Thomas Starke explains how to use reinforcement learning to solve the problem of delayed gratification in trading. The Bellman equation is used to calculate the reward of an action, with "r" representing immediate reward and "q" representing cumulative reward. Gamma is a discount factor that assigns weight to future outcomes compared to previous outcomes. By using reinforcement learning, trading decisions are not solely based on immediate rewards but also on holding positions for higher future rewards. This allows for more informed decision-making compared to greedy decision-making.

  • 00:25:00 Dr. Thomas Starke discusses how deep reinforcement learning can help in making decisions for trading based on future outcomes. Traditional reinforcement learning involves building tables based on past experiences, but in trading, this becomes complex due to the large amount of states and influences. Therefore, the solution is to use deep reinforcement learning and neural networks to approximate these tables without creating an enormous table. He explains the implementation of using gamification of trading and finding the right reward function and inputs to define the state. Overall, the use of deep reinforcement learning can help in decision-making for trading.

  • 00:30:00 In this section, Dr. Starke discusses the importance of inputs in trading and how they need to have some sort of predictive value, or else the system won't be able to make good trading decisions. He emphasizes the need to test the system for known behavior and to choose the appropriate type, size, and cost function of the neural network, depending on the chosen reward function. He then explains how gamification works in trading, where the state consists of historical and current prices, technical indicator data, and alternative data sources, and the reward is the P&L of the trade. The reinforcement learner uses the Bellman equation to label observations retroactively, and through constant updating of tables approximated by neural networks, the machine learns to make better and better trading decisions.

  • 00:35:00 In this section, Dr. Thomas Starke discusses how to structure the price series for training with reinforcement learning. He explains that instead of running through the price series sequentially, you can randomly enter and exit at different points, and it's up to the user to decide which method to choose. He also discusses the difficulty of designing a reward function and provides various examples and methods for structuring one, such as using pure percentage P&L, profit per tick, the Sharpe ratio, and different types of penalties to avoid long holding times or drawdowns.

  • 00:40:00 According to Dr. Thomas Starke, we have many options, including open high low close and volume values, candlestick patterns, technical indicators like the relative strength index, time of day/week/year, different time granularities, inputting prices and technical indicators for other instruments, and alternative data like sentiment or satellite images. These inputs are then constructed into a complex state, similar to how a computer game uses input features to make decisions. Ultimately, the key is to find the right reward function that works for your trading style and to optimize your system accordingly.

  • 00:45:00 Dr. Thomas Starke explains the testing phase that his reinforcement learner must undergo before being used to trade in the financial markets. He applies a series of tests including clean sine waves, trend curves, randomized series with no structure, different types of order correlations, noise in clean test curves, and recurring patterns to determine if the machine makes consistent profits and to find flaws in the coding. He also discusses the different types of neural networks he uses, including standard, convolutional, and long short term memory (LSTM), and his preference for simple neural networks, as they are sufficient for his needs and don't require excessive computational effort.

  • 00:50:00 In this section, Dr. Thomas Starke discusses the challenges of trading with reinforcement learning, including the difficulties of distinguishing between signal and noise and the problem of local minima. He shows that reinforcement learning struggles with noisy financial time series and dynamic financial systems with changing rules and market regimes. However, he also shows that smoothing the price curve with a simple moving average can significantly improve the performance of the reinforcement learning machine, providing insight into how to build a successful machine learning system that can make profitable trading decisions.

  • 00:55:00 In this section, Dr. Thomas Starke discusses the challenges of using reinforcement learning for trading. Firstly, reinforcement learning struggles to adapt to changes in market behavior, making it challenging to learn new behaviors. Additionally, a lot of training data is needed, but market data is often sparse. While reinforcement learning is efficient, it can overfit easily and only really acts on basic market patterns. Building more complex neural networks can overcome this, but it's a time-consuming task. Ultimately, reinforcement learning is not a silver bullet for producing profitable outcomes, and it's important to have good market experience and domain-specific knowledge to achieve successful trading outcomes. Dr. Starke offers a Quant NC lecture and encourages anyone interested in coding these systems to contact him on LinkedIn with well-formulated questions.

  • 01:00:00 Dr. Thomas Starke answers various questions related to trading with deep reinforcement learning. He explains that the Bellman equation does not introduce look-ahead bias, and technical indicators can sometimes be used as inputs after careful analysis. Satellite images could be useful for predicting stock prices, and reinforcement trading can be done on small time frames depending on neural network calculation time. He also discusses how sensitive reinforcement trading algos are to market anomalies, and explains why it doesn't make sense to train random decision trees using reinforcement learning.

  • 01:05:00 In this section, Dr. Thomas Starke recommends using neural networks for trading rather than decision trees or support vector machines due to their suitability for the problem. He explains that tuning the loss function based on the reward function used is essential. He mentions that people have tried to use reinforcement learning for high-frequency trading but ended up with slow neural networks that lacked responsiveness in real-time markets. He suggests that gaining market knowledge will significantly help pursue a trading career in the finance industry, making actual trades, and learning a lot in the process. Finally, he discusses whether one can use neural networks to get good results with options trading and explains the challenges of combining neural networks and options trading.

  • 01:10:00 In this section, Dr. Thomas Starke discusses how options data can be used as an input for trading the underlying instrument, as opposed to just relying on technical indicators. He also answers questions about using neural networks to decide the number of lots to buy or sell and how to incorporate spread, commission, and slippage into the algorithm by building a model for slippage and incorporating those factors into the reward function. He advises caution when using neural networks to decide on trade volumes and recommends using output values to size portfolio weights accordingly. He concludes by thanking the audience for their questions and for attending his talk.
Trading with Deep Reinforcement Learning | Dr Thomas Starke
  • 2020.09.23
  • www.youtube.com
Dr. Thomas Starke Speaks on Trading with Deep Reinforcement Learning (DRL). DRL has successfully beaten the reigning world champion of the world's hardest bo...
 

Harrison Waldon (UT Austin): "The Algorithmic Learning Equations"



Harrison Waldon, a researcher from UT Austin, presented his work on algorithmic collusion in financial markets, focusing on the interaction and potential collusion of reinforcement learning (RL) algorithms. He addressed the concerns of regulators regarding autonomous algorithmic trading and its potential to inflate prices through collusion without explicit communication.

Waldon's research aimed to understand the behavior of RL algorithms in financial settings and determine if they can learn to collude. He utilized algorithmic learning equations (ALEs) to derive a system of ordinary differential equations (ODEs) that approximate the evolution of algorithms under specific conditions. These ALEs were able to validate collusive behavior in Q-learning algorithms and provided a good approximation of algorithm evolution, demonstrating a large basin of attraction for collusive outcomes.
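
Schematically, the ODE method referred to here replaces the noisy parameter updates with their averaged drift under the stationary distribution induced by freezing the parameters; the notation below follows the generic stochastic-approximation convention and is not necessarily the paper's exact symbols:

```latex
\theta_{n+1} = \theta_n + \alpha_n\, F(\theta_n, s_n, a_n, s_{n+1}),
\qquad
\dot{\theta}(t) = \bar{F}\big(\theta(t)\big)
  := \mathbb{E}_{(s,a,s') \sim \mu_{\theta(t)}}\!\left[ F\big(\theta(t), s, a, s'\big) \right]
```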

However, there are challenges in calculating the stationary distribution and distinguishing true collusion from rational self-preserving behavior. Numerical difficulties arise in determining the stationary distribution, and it remains a challenge to differentiate genuine collusion from behavior driven by self-interest.

Waldon highlighted the limitations of static game equilibrium when applied to dynamic interactions, emphasizing the need for a comprehensive approach to regulating behavior: collusion facilitated by algorithms without direct communication between parties requires careful consideration. The talk concluded with Waldon thanking the attendees, marking the end of the spring semester series.
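
The kind of experiment summarized in the timestamps below can be reproduced in miniature: two stateless Q-learners with softmax action selection repeatedly play a prisoner's-dilemma-style pricing game. The payoffs, learning rate, and temperature here are arbitrary toy values, not the paper's calibrated model.

```python
# Toy version of two softmax Q-learners in a repeated prisoner's dilemma.
import numpy as np

rng = np.random.default_rng(6)
# Row player's payoffs: action 0 = "collude" (wide spread), action 1 = "compete" (tight spread)
payoff = np.array([[3.0, 0.0],
                   [4.0, 1.0]])

alpha, gamma, temp, T = 0.05, 0.95, 0.2, 50_000
Q = np.zeros((2, 2))                        # Q[agent, action]; stateless game for simplicity

def softmax_action(q):
    p = np.exp(q / temp); p /= p.sum()       # softmax exploration over the two actions
    return rng.choice(2, p=p)

for _ in range(T):
    a0, a1 = softmax_action(Q[0]), softmax_action(Q[1])
    r0, r1 = payoff[a0, a1], payoff[a1, a0]
    Q[0, a0] += alpha * (r0 + gamma * Q[0].max() - Q[0, a0])   # Q-learning update, agent 0
    Q[1, a1] += alpha * (r1 + gamma * Q[1].max() - Q[1, a1])   # Q-learning update, agent 1

print("final Q-values:", np.round(Q, 2))     # inspect whether "collude" ends up preferred
```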

  • 00:00:00 In this section, Harrison Waldon from UT Austin discusses his recent work on algorithmic collusion in the financial industry. He notes that the majority of trades in electronic markets are executed by algorithms, many of which use machine learning techniques such as reinforcement learning (RL) to learn trading strategies. While RL has seen practical success in deep hedging and optimal execution, regulators have expressed concerns that firms relying on entirely autonomous algorithmic trading could lead to tacit collusion and inflated prices in the market without explicit communication. Waldon's work aims to provide tools for studying algorithmic collusion and its potential impact on the financial industry.

  • 00:05:00 In this section, Harrison Waldon discusses the limitations of existing studies on the behavior of reinforcement learning algorithms in financial settings. While some experimental evidence shows that certain reinforcement learning algorithms can learn complex collusive strategies in certain scenarios, such experiments lack theoretical rigor. Additionally, the lack of explainability of RL algorithms is a concern for organizations such as the AFM, particularly in multi-agent and non-stationary environments such as financial markets. The main questions that drive Waldon's work are: How can we understand the behavior of interacting reinforcement learning algorithms in financial settings, and can these algorithms provably learn to collude?

  • 00:10:00 In this section, Harrison Waldon explains that in reinforcement learning (RL), the function F, or the learning rule, takes old parameter values, current state, current action, and possibly the subsequent state to incorporate the information into a new set of parameters. The goal is to find a set of parameters that approximate the optimal policy. One popular algorithm in RL is asynchronous tabular Q-learning, which associates one parameter value to each state-action pair and updates them in an asynchronous fashion. Q-learning works well in single-agent settings but becomes more challenging in multi-agent and non-stationary settings, common in markets. The state space in finance is defined as the vector of other agents' posted prices, where the action space may include buying, selling, or holding.

  • 00:15:00 In this section, we learn that one interesting finding in the team's work is that algorithms can learn sunspot trading if they start conditioning on irrelevant market factors. The speaker explains how Q learning behaves in interaction with other Q learners and defines the multi-agent analog of a Markov decision process as a stochastic game. They discuss how agents are learning and adapting their policies through time, making the true dynamics of the state process non-stationary, even though the transition function might be fixed. The main example used in the talk is the prisoner's dilemma, interpreted as a stylized market with two competing liquidity providers.

  • 00:20:00 In this section, the goal is to understand the behavior of algorithms learning to play a repeated prisoner's dilemma game against other algorithm-equipped players. To achieve this, the learning process must be given a notion of state, and the resulting system is called the algorithmic learning equations. To derive this system, they use the ODE method from stochastic approximation to approximate the evolution of parameters, allowing for a direct analysis of policies. While limitations exist in this model, the presented tools are general and can address these limitations.

  • 00:25:00 In this section, Harrison Waldon discusses the algorithmic learning equations, which approximate the evolution of parameters using classical stochastic approximation and an ODE. By averaging the learning rule with respect to the stationary distribution induced by fixed parameters, they derive a system of ODEs that mimics the learning algorithm. These algorithmic learning equations can approximate the evolution of the algorithm under certain conditions, such as non-degenerate learning rates and compactness of the parameters; Lipschitz continuity of the stationary distribution and the policies also proves crucial. Stochastic approximation is needed here because the underlying process is non-stationary, with dynamics that change as the agents learn.

  • 00:30:00 In this section, Harrison Waldon discusses the Algorithmic Learning Equations and their properties. The Q-learning example discussed in the video satisfies all the properties of these equations, including maintaining parameters in a compact set, ergodic Markov chains, Lipschitz continuity in policies and learning rules. Waldon shows that under appropriate time scaling, the algorithm will be close to the solutions of the ODE for any finite time horizon with high probability and converge to locally asymptotically stable solutions almost surely if the learning rate decays fast enough. Waldon concludes by validating these equations by applying them to repeated prisoner's dilemma using Q-learning with softmax action selection.

  • 00:35:00 In this section, the conditions necessary for the algorithmic learning equations to approximate the evolution of the algorithms are discussed. The ergodicity condition on the state process is satisfied immediately in this scenario because there is only one state. Trajectories of the learning algorithm are simulated with both large and small learning rates, which shows that the approximation by the algorithmic learning equations is good when learning rates are small. The ALEs are also useful in analyzing the probability of obtaining a collusive outcome, with a large basin of attraction leading to such an outcome. In the next part of the video, each agent is given the ability to condition her spreads on the spreads of her opponent from the previous period.

  • 00:40:00 In this section of the video, Harrison Waldon explains the probability of playing a certain action and the source of noise in the simulations being analyzed. He discusses the ergodic state process, the stationary distribution of policies, and how to interpret the components of each agent's policies in terms of punishment, in order to examine the frequency with which a set of policies will induce the collusive outcome. He also provides plots of the algorithmic learning equations for state-dependent Q-learning over a range of initial conditions, run until the trajectories numerically converge.

  • 00:45:00 In this section of the video, Harrison Waldon discusses the results of using Q-learning with the Algorithmic Learning Equations to learn collusion behavior in stochastic games. The agents were able to learn to play collusive spreads almost 100% of the time, even if they did not start with high collusion probabilities. The results also showed a large basin of attraction for the collusive outcome, but also unexpected behaviors such as flipping back and forth between mutually collusive and mutually competitive outcomes. The methodology used in this study provided minimally restrictive sufficient conditions that allowed for the behavior of a wide class of state-dependent reinforcement learning algorithms to be approximated. However, there were some limitations due to numerical difficulties in calculating the stationary distribution. Overall, Q-learning was successful in learning collusive behavior in these stochastic games.

  • 00:50:00 In this section, Harrison Waldon explains how to guarantee that the algorithmic learning equations approximate the asymptotic behavior of the algorithm by showing the existence of a Lyapunov function, which is difficult due to the need to deal with the stationary distribution. To address this issue, Waldon introduces a new algorithm that is a generalization of classical fictitious play called State Dependent Smooth Fictitious Play. This algorithm assumes that all agents in a system play according to stationary policies, and the beliefs of those strategies are formed via the empirical frequency of play. The algorithm adds some randomness into the system and takes actions according to a soft-max distribution to get around the issues of deterministic learning rules.

  • 00:55:00 In this section, Harrison Waldon explains that the algorithmic learning equations can be used to analyze a continuous time system and guarantee that the smooth fictitious play algorithm will converge to rest points of the system, some of which may be collusive strategies. As the discount factor grows, the probability of learning collusive outcomes increases. Waldon also discusses the need for more realistic market dynamics and the possibility of applying algorithmic learning equations to deep reinforcement learning algorithms for studying equilibria and prices. Finally, he acknowledges the difficulty of collusion detection and the challenge of distinguishing between true collusion and rational self-preserving behavior.

  • 01:00:00 In this section, Harrison Waldon discusses how the equilibrium of a static game is a narrow reflection of the reality of dynamic interaction between people. He emphasizes the need for a holistic approach when considering what equilibrium behavior to regulate, especially in terms of collusive behavior that may be seen as rational and arrive through algorithms without direct communication between parties. The session ends with Waldon thanking the attendees and concluding the spring semester series.
Harrison Waldon (UT Austin): "The Algorithmic Learning Equations"
  • 2023.04.26
  • www.youtube.com
Abstract: Recently there has been concern among regulators and legal theorists about the possibility that pricing algorithms can learn to collude with one a...
 

Irene Aldridge (AbleBlox and AbleMarkets): "Crypto Ecosystem and AMM Design"




Irene Aldridge, the Founder and Managing Director of AbleMarkets, delves into various aspects of blockchain technology, automated market making (AMM), and the convergence of traditional markets with the world of AMMs. She emphasizes the significance of these topics in finance and explores potential challenges and solutions associated with them.

Aldridge begins by providing an overview of her background in the finance industry and her expertise in microstructure, which focuses on understanding market operations. She highlights the increasing adoption of automated market making models, initially prominent in the crypto market but now extending to traditional markets. She outlines the structure of her presentation, which covers introductory blockchain concepts, the application of blockchain in finance and programming, and real-world case studies of market making and its impact on traditional markets.

Exploring blockchain technology, Aldridge describes it as an advanced database where each row carries a cryptographic summary of the preceding row, ensuring data integrity. She explains the mining process involved in blockchain, where proposed content is validated and added to the chain, leading to greater transparency and decentralization in paperwork and payment systems.
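
The hash-chaining idea is easy to demonstrate with a toy example (a generic sketch, not any particular blockchain's implementation):

```python
# Toy hash chain: each block stores the hash of the previous block, so tampering is detectable.
import hashlib, json

def block_hash(block):
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

chain = [{"index": 0, "data": "genesis", "prev_hash": "0" * 64}]
for i, data in enumerate(["alice pays bob 5", "bob pays carol 2"], start=1):
    chain.append({"index": i, "data": data, "prev_hash": block_hash(chain[-1])})

# Verification: recomputing hashes exposes any edit to an earlier block.
ok = all(chain[i]["prev_hash"] == block_hash(chain[i - 1]) for i in range(1, len(chain)))
print("chain valid:", ok)
```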

Aldridge discusses the shift toward decentralization in the crypto ecosystem, highlighting the trade-off between privacy and the robustness of having multiple copies of the database on servers. She explains the blockchain process, from defining blocks and creating cryptographic signatures to the core innovations of proof of work and mining, which ensure security against hacking attempts.

However, Aldridge acknowledges the challenges associated with the proof of work mining system, including the increasing cost of mining, a decreasing number of miners, and potential vulnerabilities. She highlights alternative solutions, such as Ethereum's block aggregation and Coinbase's elimination of riddles for mining.

The speaker moves on to explore staking in the crypto ecosystem, where stakeholders commit their funds to support the network's operations. She acknowledges the potential issue of crypto oligarchs manipulating the market and explains how off-chain validation and automated market making have been implemented to counter this problem. Aldridge emphasizes the importance of understanding these concepts to grasp the significance of automated market making in preventing manipulation in the crypto market.

Aldridge delves into the principles behind Automated Market Makers (AMMs), emphasizing their revolutionary impact on cryptocurrency trading. She explains how AMM curves, shaped by liquidity-related invariants, determine prices based on the remaining inventory in the liquidity pool. She highlights the benefits of AMMs, including 24/7 liquidity, formulaic slippage estimation, and fair value determination through convex curves. However, she also mentions that AMMs can face losses in volatile conditions, leading to the introduction of transaction fees.

Comparing AMMs to traditional markets, Aldridge discusses the advantages of automated market making, such as continuous liquidity, predictable slippage, and fair value determination. She explains the constant product market making method employed by UniSwap, illustrating how execution brokers can select platforms for liquidity and execution based on parameterized data.
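
As a rough sketch of the constant product rule behind UniSwap-style pools, the toy code below keeps the product of the two reserves fixed and shows how the marginal price and slippage fall out of the remaining inventory; the pool sizes and trade size are illustrative assumptions.

```python
# Sketch of the constant product rule (x * y = k) used by UniSwap-style AMMs,
# showing how price and slippage follow from the remaining inventory.
# Pool sizes and trade size are illustrative assumptions.

x, y = 1_000.0, 2_000_000.0   # risky-asset and numeraire reserves
k = x * y

def buy_risky(dx: float):
    """Buy dx of the risky asset; return the numeraire paid and the new marginal price."""
    global x, y
    new_x = x - dx
    new_y = k / new_x            # invariant: new_x * new_y == k
    paid = new_y - y
    x, y = new_x, new_y
    return paid, y / x           # marginal price = numeraire per unit of risky asset

print("price before trade:", y / x)
paid, new_price = buy_risky(50.0)
print("paid:", round(paid, 2), "avg price:", round(paid / 50.0, 2), "price after:", round(new_price, 2))
```
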

The speaker discusses the calculation of volume changes and the distinction between public and private liquidity pools. She presents empirical examples using Bitcoin and Ethereum from different exchanges, pointing out differences in their curves and suggesting potential concerns with certain platforms.

Aldridge emphasizes the importance of designing AMM curves using convex shapes to ensure market stability. She explains the roles of liquidity providers and traders in the system and how they benefit from transaction fees. She also raises the possibility of AMM systems being used in traditional markets, prompting consideration of their application to assets like IBM stock.

Aldridge explores the convergence of traditional markets with automated market making, noting that traditional market makers are already implementing similar systems. She highlights the expected changes in market interactions, trading strategies, execution methods, and transparency. The influence of automated market makers on microstructure in the markets is also discussed.

Addressing the feasibility of implementing automated liquidity in 24/7 trading environments like the crypto market, Aldridge explains that automated market making can eliminate risks associated with traditional market making methods and that the technology is readily available. However, she cautions that not all crypto exchanges utilize automated market making, emphasizing the need for research to address risk management and externalities. Aldridge points out that automated market making technology dates back to around 2002, predating cryptocurrencies like Bitcoin.

When questioned about the potential unfair advantage of automated market making dealers having access to private information, Aldridge acknowledges that it poses a problem. However, she suggests that shopping around and quantifying the automated market making curve across different platforms can help mitigate this issue. She notes that miners are incentivized to continue their work because they are the ones who benefit from accessing and validating order blocks. Nevertheless, unless there is a private incentive, it is increasingly challenging to generate profits in this space, leading to the formation of oligopolies. Aldridge proposes that insurance could serve as a natural incentive for miners to work almost for free. However, insurance companies perceive blockchain as a major threat to their industry, resulting in resistance to such system designs. She also addresses the possibility of fraud schemes, highlighting potential manipulation in the IBM curve.

In the context of centralized limit order books, Aldridge explains how market participants are utilizing automated market making models, such as AMMs, which provide liquidity in a cost-effective and automated manner, potentially resulting in profits. However, distinguishing between traders using AMMs and those manually placing limit orders remains a challenge. Aldridge suggests that identifying malicious users through microstructural data analysis could offer a potential solution. She believes that if AMMs continue to dominate the market, a more efficient and streamlined model will emerge.

In summary, Irene Aldridge's discussion covers various aspects of blockchain technology, automated market making, and the convergence of traditional markets with the AMM world. She explores the basics of blockchain, discusses the challenges and potential solutions related to proof of work mining systems, and highlights the benefits of AMMs over traditional markets. Aldridge also addresses concerns regarding the feasibility of implementing automated liquidity, the issue of automated market making dealers having access to private information, and the potential role of insurance as an incentive for miners. Through her insights, she provides valuable perspectives on the current landscape and future possibilities in the world of finance and automated market making.

  • 00:00:00 In this section, Irene Aldridge discusses her background in the finance industry and her interest in microstructure, which focuses on how markets operate. She then introduces the topic of automated market making and how it has originated in the crypto market but is now being deployed in traditional markets. She provides an outline for the presentation, which includes a blockchain 101 introduction, blockchain applications in finance, programming, and case studies of market making in practice and its spillovers to traditional markets. Aldridge has a background in electrical engineering and has worked in various areas in the finance industry, including trading, risk management, and research.

  • 00:05:00 In this section, Irene Aldridge explains the basics of blockchain technology. She describes it as a fancy database where each row carries a cryptographic summary of the previous row, making it computationally difficult to change any previous data. Additionally, she discusses the mining process of blockchain, which involves examining the proposed content of a block and committing it to memory. Aldridge believes that blockchain can help move paperwork and payments on-chain, allowing for more transparency and decentralization.

  • 00:10:00 In this section, Irene Aldridge discusses the move towards a decentralized model in the crypto ecosystem, where transactions are public and stored on multiple servers rather than centralized on an Oracle server. While this means privacy is sacrificed, the increased robustness of having multiple copies of the database on servers is seen as a fair trade-off. Aldridge explains that the blockchain process is relatively straightforward, starting with defining a block and creating a crypto signature or hash, which is then encoded into the next block. The core innovations of proof of work and mining procedures are then discussed, with the aim of ensuring security against hacking attempts by making the computational complexity of recalculating the chain too great.

  • 00:15:00 In this section, Irene Aldridge discusses the issues plaguing the proof of work mining system in cryptocurrency. She explains that the cost of mining is becoming too expensive for most people, leading to an equilibrium where only a specific group of individuals can afford the costs and the rest are unable to mine. Additionally, the number of miners is decreasing over time, making the system vulnerable to potential hacks. The strength of the decentralized model is that the longest chain is automatically selected by the core engine, preventing colluders from catching up and introducing hacked blocks into the system. However, there are growing concerns about the proof of work system, including conflicts of interest between miners who both trade and mine, and the time it takes to mine blocks. New solutions are being developed, such as Ethereum's aggregation of blocks every 12 seconds and Coinbase's decision to stop requiring people to solve riddles to mine.

  • 00:20:00 In this section, the speaker discusses the process of staking in the crypto ecosystem, in which participants commit money to the system in place of mining work. Stakeholders can lock up their stake or collateral for a specific period, and if there is fraudulent activity, they pay for it with their stake. However, this can create an oligopoly of crypto oligarchs who are able to manipulate the market. To combat this, off-chain validation and automated market making have been used. The latter has become more popular in the crypto ecosystem and has various open-source products that anyone can access, making it easier to understand. The speaker highlights that understanding the background information, such as staking and off-chain validation, is essential to understanding the importance of automated market making and how it works to prevent manipulation in the crypto market.

  • 00:25:00 In this section, Irene Aldridge discusses the principles behind different Automated Market Makers (AMMs), which have revolutionized the world of cryptocurrency trading. She explains that AMM curves, which vary in curvature and offset, are shaped by an invariant related to liquidity, and that the price is a function of the remaining inventory in the liquidity pool. One benefit of AMMs is that they can trade 24/7 without quoted bid-ask spreads and can automatically adjust to changing market conditions. However, AMMs can lose money in volatile conditions, so they charge transaction fees, which traditional markets do not.

  • 00:30:00 In this section, Irene Aldridge discusses automated market making (AMM) and its benefits over traditional markets, such as continuous 24/7 liquidity, formulaic slippage that can be estimated ahead of time, and fair value through the use of a convex curve. Aldridge explains the constant product market making method used by the popular UniSwap protocol, which follows a convex curve between quantity one and quantity two. By collecting data from different exchanges and parameterizing it based on this constant product method, Aldridge highlights how execution brokers can determine which platforms to choose for liquidity and execution.

  • 00:35:00 In this section, Irene Aldridge discusses how to compute the respective changes in volume and currency and runs a very simple simulation using the tick rule from microstructure to determine whether a given volume is a buy or a sell. She also explains the two types of liquidity pools, public and private, and the arbitrage that goes on between them, emphasizing that there should be no difference between them on sufficiently liquid platforms. Aldridge then presents empirical examples using Bitcoin and Ethereum from various exchanges, such as Bitfinex and Bitstamp, and highlights their curves, pointing out that FTX's curve looks nothing like what one would expect from an automated market making perspective and suggesting that it might have been a Ponzi scheme all along.

  • 00:40:00 In this section, Irene Aldridge discusses the design of automated market making (AMM) curves and compares them to examples from various cryptocurrency exchanges. She highlights the importance of using convex curves in AMM design to ensure market stability and avoid drastic price increases when inventory is bought out. Additionally, she explains the roles of liquidity providers and traders in the system and how they benefit from the transaction fees. Aldridge also mentions rumors of AMM systems being used in traditional markets and emphasizes the need to consider how this design would work for products such as IBM stock.

  • 00:45:00 In this section, Irene Aldridge discusses the convergence of traditional markets and the automated market making world, where traditional market makers are already deploying similar systems. She points out that many changes are expected in how individuals interact with the markets, how trading strategies are built, how execution is carried out, and how transparent everything becomes. She also notes that microstructure is changing in the markets due to the influence of automated market makers. Irene provides a basic understanding of how daily IBM data is used to estimate AMM curves, and how more granular data would make it easier to obtain cleaner estimates.

  • 00:50:00 In this section, Irene Aldridge discusses the feasibility of implementing automated liquidity in 24/7 trading environments like the crypto market, where traditional market making methods may not be as effective. She explains that automated market making can eliminate risks associated with traditional market making methods and that the technology is widely available. However, she warns that not all crypto exchanges use automated market making and that research is needed to address risk management and externalities. She also notes that this technology has been around since roughly 2002, predating cryptocurrencies like Bitcoin. When asked about the potential unfair advantage of automated market making dealers having access to private information, Aldridge notes that this is an open problem that requires further research.

  • 00:55:00 In this section, Irene Aldridge discusses how AMM dealers, those who take in coins and run automated market making systems, do see order flow before others, which presents a problem. However, as there are many platforms available, shopping around and quantifying the automated market making curve across platforms can help mitigate this issue. Irene also notes that miners remain motivated to keep going, as they are the ones who benefit from inspecting and validating order blocks. However, unless there is a private incentive, it is becoming increasingly difficult to make money in this space, leading to the formation of oligopolies. Irene suggests that insurance could be a natural incentive that would let miners work almost for free. However, insurance companies see blockchain as a major threat to their existence, so there is resistance to this kind of system design. Lastly, Irene addresses a question on the possibility of a fraud scheme, stating that there could be one in the IBM curve, where one could argue that the bottom is being manipulated.

  • 01:00:00 In this section, Irene Aldridge discusses the use of automated market making models in centralized limit order books. Market participants are utilizing their own AMMs as it is low cost and automated, providing liquidity for the market with the potential to make a profit. Despite this, it is currently difficult to differentiate between traders using an AMM and those placing limit orders manually. Aldridge suggests that identifying bad actors through the microstructure data may be an open problem, but if AMMs continue to dominate the market, a more streamlined model will emerge.
Irene Aldridge (AbleBlox and AbleMarkets): "Crypto Ecosystem and AMM Design"
  • 2023.03.29
  • www.youtube.com
Abstract: Assets on blockchain trade 24x7 with very thin liquidity. This demands new fully automated processes, including Automated Market Making (AMM). We d...
 

Agostino Capponi (Columbia): "Do Private Transaction Pools Mitigate Frontrunning Risk?"


Agostino Capponi (Columbia): "Do Private Transaction Pools Mitigate Frontrunning Risk?"

Agostino Capponi, a researcher from Columbia University, delves into the issue of front running in decentralized exchanges and proposes private transaction pools as a potential solution. These private pools operate off-chain and separate from the public pool, ensuring that validators committed to not engaging in front running handle them. However, Capponi acknowledges that using private pools carries an execution risk since not all validators participate in the private pool, which means there's a possibility that transactions may go unnoticed and remain unexecuted. It's important to note that the adoption of private pools might not necessarily reduce the minimum priority fee required for execution. Furthermore, Capponi points out that the competition between front-running attackers benefits validators through maximal extractable value (MEV). Ultimately, while private pools can mitigate front-running risk, they may increase the fee needed for execution, leading to inefficiencies in allocation.

Capponi highlights the correlation between the proportion of transactions routed through private pools and the probability of being front-run, which complicates optimal allocation. He also explores different types of front-running attacks, including suppression and displacement attacks, and presents data showing the substantial losses incurred due to front running. To address these risks, Capponi suggests educating users on transaction timing and making transaction validation more deterministic to create a more equitable system.
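
To illustrate the sandwich-style attack behind these losses, the toy sketch below plays an attacker's front-running buy, the victim's buy, and the attacker's back-running sell through a constant product pool; the pool and trade sizes are illustrative assumptions, and priority fees and gas costs are ignored.

```python
# Back-of-the-envelope sketch of a sandwich attack against a constant product pool.
# All pool sizes and trade sizes are illustrative assumptions; fees are ignored.

def swap_in_numeraire(x, y, amount_in):
    """Spend `amount_in` of numeraire; return (risky received, new_x, new_y)."""
    k = x * y
    new_y = y + amount_in
    new_x = k / new_y
    return x - new_x, new_x, new_y

def swap_in_risky(x, y, amount_in):
    """Sell `amount_in` of the risky asset; return (numeraire received, new_x, new_y)."""
    k = x * y
    new_x = x + amount_in
    new_y = k / new_x
    return y - new_y, new_x, new_y

x, y = 1_000.0, 2_000_000.0                               # pool reserves: risky asset, numeraire

atk_risky, x, y = swap_in_numeraire(x, y, 100_000.0)      # 1) attacker front-runs the buy
victim_risky, x, y = swap_in_numeraire(x, y, 100_000.0)   # 2) victim's buy fills at a worse price
proceeds, x, y = swap_in_risky(x, y, atk_risky)           # 3) attacker sells into the inflated price

print("attacker profit (numeraire):", round(proceeds - 100_000.0, 2))
print("risky tokens the victim received:", round(victim_risky, 4))
```
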

The discussion touches on the dynamics of private transaction pools, the challenges of adoption, and the potential trade-offs involved. Capponi explains how private pools provide protection against front running but cautions that their effectiveness depends on the number of validators participating in the private pool. Additionally, he addresses the issue of validators not adopting private pools due to the loss of MEV, proposing potential solutions such as user subsidies to incentivize their adoption.
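
One stylized way to see this trade-off, with numbers that are purely illustrative rather than taken from the paper, is to compare the user's expected cost in each venue: the public pool carries an expected front-running loss, while the private pool carries an expected delay cost that shrinks as validator adoption rises.

```python
# Toy venue-choice sketch: private pool removes the front-running loss but adds
# execution (delay) risk when only a fraction of validators monitor it.
# All numbers are illustrative assumptions, not the paper's calibration.

def expected_cost_public(frontrun_loss: float, p_attacked: float) -> float:
    return p_attacked * frontrun_loss

def expected_cost_private(adoption: float, delay_cost_per_block: float) -> float:
    # With validator adoption rate `adoption`, the expected wait until a participating
    # validator proposes a block is 1 / adoption blocks (geometric waiting time).
    expected_blocks = 1.0 / adoption
    return (expected_blocks - 1.0) * delay_cost_per_block

loss, p_attack, delay_cost = 500.0, 0.6, 50.0
for adoption in (0.1, 0.5, 0.9):
    pub = expected_cost_public(loss, p_attack)
    priv = expected_cost_private(adoption, delay_cost)
    better = "private" if priv < pub else "public"
    print(f"adoption={adoption:.1f}: public cost={pub:.0f}, private cost={priv:.0f} -> choose {better}")
```
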

While private transaction pools can mitigate front-running risks to some extent, Capponi emphasizes that they are not foolproof and may not achieve optimal allocation. The complexity arises from factors such as the competition between attackers, the adoption rate of validators in private pools, and the resulting impact on execution fees. The discussion raises important considerations for the blockchain community in addressing front-running risks and ensuring a fair and efficient decentralized exchange environment.

  • 00:00:00 In this section, Agostino Capponi introduces the topic of decentralized exchanges and the front running risks they face. He explains that blockchain architecture works by having transactions submitted to a memory pool that is then accessed by validators who append the transactions to blocks and receive fees from users. Capponi notes that users can prioritize their transactions by offering higher fees, but this system can lead to front running. He introduces the concept of private pools as a potential solution to this problem and discusses how his team constructed a game theory model to test the effectiveness of these pools in mitigating front running.

  • 00:05:00 In this section, Agostino Capponi describes the problem of front running in public, open-access blockchains, where users can see and submit pending transactions. Front running attacks occur when users take advantage of actionable information about pending or executed transactions. Capponi explains the sandwich attack, in which an attacker pays a higher fee so that their own order executes just before the user's transaction, pushing the price up, and then executes a reverse transaction afterwards for a profit. Although there is a risk of failure if the attacker's fee is not high enough, attackers typically time their orders to increase their chances of success.

  • 00:10:00 In this section, Agostino Capponi discusses several types of front running attacks, including suppression attacks and displacement attacks, in which an attacker submits multiple transactions or displaces another user's transaction in order to execute their desired transaction first. Capponi asks whether front running is a material risk that limits blockchain adoption and presents a graph of the number of front running attacks and the revenue generated from them from May 2020 to March 2021, which indicates a loss of approximately 10,000 ETH, or $125 million, due to front running.

  • 00:15:00 In this section, Agostino Capponi discusses the issue of front running in Ethereum transactions and its associated costs, both direct and indirect. He explains that one solution to this problem is the use of private transaction pools, which are essentially off-chain, parallel channels that are separate from the public pool and can only be monitored by some validators. Transactions submitted to these private pools have zero front running risk as long as validators behave honestly, and if they are found to be front running transactions, they will be ejected from the pool. Overall, private transaction pools provide a good solution for those who are worried about front running and want to get their transactions executed without being front run.

  • 00:20:00 In this section, Agostino Capponi discusses the use of private transaction pools and whether they can mitigate the risk of frontrunning. Capponi explains that private pools are only visible to validators and are off-chain, meaning attackers cannot access them. This eliminates the possibility of being front run and provides guarantees, as validators are committed to not engage in front running. Capponi also addresses the issue of adoption and whether users will submit their transactions into the pool. Additionally, he mentions how attackers may still compete with each other, but the private pool can reduce the risk of underinvestment by arbitrage bots. Finally, he introduces a simple model with three agents to discuss whether adoption of private pools will be observed.

  • 00:25:00 In this section, Agostino Capponi discusses the concept of private transaction pools and whether or not they mitigate front-running risks. He explains that there are two possible venues for submitting transactions: the private pool and the public pool. After the validators have chosen which pool to monitor, the users bid their priority fee and choose where they want to submit the transaction. The attackers then scan for opportunities, submit transactions, and decide where to submit them. Capponi emphasizes the importance of the probability of detecting opportunities and the probability of achieving a successful front-run.

  • 00:30:00 In this section, the speaker explains the concept of private transaction pools and whether they can mitigate frontrunning risks. Private transaction pools can provide protection against frontrunning because only the validator who appends the block can see the transactions, preventing arbitrageurs from identifying opportunities before the user. However, submitting through a private pool comes with an execution risk: not all validators are on the private pool, so there is a chance that the transaction will not be visible and, therefore, not executed. While private pool transactions have priority over public pool transactions, the number of validators monitoring the private pool determines the execution risk, making it something users must weigh before submitting a transaction through the private pool.

  • 00:35:00 In this section, Agostino Capponi explains that private transaction pools can mitigate front-running risks to some extent, but they are not foolproof. Attackers will engage in an arms race to gain priority in getting their orders executed, and they can use both private and public pools to reduce execution risk while still receiving prioritized execution. Meanwhile, users who can be front-run will decide whether to submit their transactions to the private pool or the public pool based on the adoption rate of validators in the private pool and the front-running cost. If the adoption rate is high, they will use the private pool to avoid being front-run, but if it is low, they may choose the public pool to avoid waiting too many blocks for execution despite the risk of being front-run.

  • 00:40:00 In this section, Agostino Capponi explains how private transaction pools can potentially mitigate front running risk. If a user submits to a private pool and all validators join that pool, front running risk is eliminated because there is no opportunity for arbitrage. However, in cases where front running risk is low, not all validators will adopt the private pool, meaning that the user may instead opt for the public pool, exposing themselves to front running risk again.

  • 00:45:00 In this section, Agostino Capponi discusses whether private transaction pools can mitigate front-running risk and reduce the minimum fee needed for execution. It is argued that front-running risk is only eliminated if the loss is large, and even then, some front-running losses cannot be eliminated. Additionally, the competition between front-running attackers benefits the validators or miners through maximal extractable value (MEV). The adoption of private pools may not necessarily reduce the minimum priority fee needed for execution as validators are only willing to adopt private pools if they can earn a higher fee. Moreover, the existence of a private pool may lead to more demand for block space, which would increase the fee needed for execution. Ultimately, private pools may not always reduce front-running risk but can increase the fee needed for execution, leading to allocative inefficiency.

  • 00:50:00 In this section, the speaker discusses the inefficiencies caused by front-running risks in blockchains. The first inefficiency results from users who may decide not to submit transactions due to the risk of being front-run, which can lead to a suboptimal allocation of transactions. The second inefficiency comes from front-running attacks, where an attacker executes a transaction before the victim, resulting in a transfer of value. To mitigate these risks, private transaction pools are proposed, which can increase the value of transactions and help users submit without fear of front-running. However, the adoption of private pools by all validators is not always attainable due to bad MEV and the resulting loss of revenue for validators.

  • 00:55:00 In this section, Agostino Capponi discusses the problem of validators not adopting the private pools, even though that would be the socially optimal outcome. The reason is that they would lose their MEV (maximal extractable value), and without a benefit to them, they will not switch. The solution would be for front-runnable users to subsidize the validators by committing to pay them an amount equivalent to what they would have saved by not being front run. The data indicate that when the competition to execute first is fierce, the attackers' cost-to-revenue ratio is substantially lower due to the adoption of Flashbots private pools.

  • 01:00:00 In this section, Agostino Capponi discusses his research on private transaction pools and whether they mitigate the risk of frontrunning. He explains that while private pools can offer some mitigation for large frontrunning losses, they are not beneficial for attackers running arbitrage bots, as it can make their situation worse. The probability of being frontrun can be estimated by looking at the slippage that the transaction would incur compared to the price that needs to be paid to frontrun. Capponi points out that there is a positive correlation between the probability of being frontrun and the proportion of transactions that are being routed through private pools. He concludes that private pools can't achieve optimal allocation as not all validators monitor the pool, resulting in inefficiencies such as frontrunning risk or block space being allocated to a frontrunnable transaction.

  • 01:05:00 In this section, Agostino Capponi from Columbia University discusses the use of private transaction pools to avoid front-running risks in blockchain, particularly in Ethereum and Polygon. He also notes that currently a monopolistic entity receives all the provider rewards, and Ethereum is considering solutions such as burning or redistributing maximal extractable value (MEV) to prevent this. Capponi also raises the controversial issue of weapons of math destruction in the context of blockchain transactions and the transfer of value from those who don't understand the formula to those who do.

  • 01:10:00 In this section, the speakers discuss the issue of front-running in private transaction pools and how it can impact unsophisticated users such as family offices. They suggest that in order to make the system more equitable, there needs to be a way to educate these users on how to time their transactions better to avoid being front-run. They also note that bots that engage in front-running are extremely sophisticated and use complex algorithms to determine the best way to execute transactions while generating the most profit. The speakers suggest that if the time of transaction validation were more deterministic, it would be easier for users to time their transactions better and reduce the risk of front-running.
Agostino Capponi (Columbia): "Do Private Transaction Pools Mitigate Frontrunning Risk?"
  • 2023.01.25
  • www.youtube.com
Abstract: Blockchain users who submit transactions through private pools are guaranteed pre-trade privacy but face execution risk. We argue that private poo...
 

Dr. Kevin Webster: "Getting More for Less - Better A/B Testing via Causal Regularization"



Dr. Kevin Webster: "Getting More for Less - Better A/B Testing via Causal Regularization"

In this video, Dr. Kevin Webster delves into the challenges associated with trading experiments and causal machine learning, expanding on various key topics. One prominent issue he addresses is prediction bias in trading, where the observed return during a trade is a combination of price impact and predicted price move. To mitigate this bias, Dr. Webster proposes two approaches: the use of randomized trading data and the application of causal regularization. By incorporating the trading signal that caused a trade into the regression model, biases can be eliminated.

Dr. Webster introduces the concept of a causal graph, which involves three variables: the alpha of the trade, the size of the trade, and the returns during the trade. He asserts that accurately estimating price impact is challenging without observing alpha, and traditional econometrics techniques fall short in this regard. He highlights the limitations of randomized trading experiments due to their limited size and duration, emphasizing the need for careful experiment design and cost estimation using simulators.
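
The bias implied by this causal graph is easy to reproduce on synthetic data: when trades are driven by an unobserved alpha that also moves prices, regressing returns on trade size alone overstates the impact coefficient. The coefficients and sample size below are made-up assumptions chosen only to exhibit the effect.

```python
# Synthetic illustration of prediction bias: trades are driven by alpha, and alpha
# also moves prices, so regressing returns on trade size alone overestimates the
# true price impact. All coefficients are made-up assumptions.
import numpy as np

rng = np.random.default_rng(1)
n = 50_000
alpha = rng.normal(0.0, 1.0, n)                    # unobserved signal
trade = 0.8 * alpha + rng.normal(0.0, 1.0, n)      # trading reacts to alpha
true_impact = 0.1
ret = true_impact * trade + 0.5 * alpha + rng.normal(0.0, 1.0, n)

# Naive regression: returns on trade size only (alpha omitted).
naive = np.polyfit(trade, ret, 1)[0]

# Regression that also conditions on alpha (possible only when alpha is observed).
X = np.column_stack([trade, alpha, np.ones(n)])
full = np.linalg.lstsq(X, ret, rcond=None)[0][0]

print(f"true impact {true_impact:.3f} | naive estimate {naive:.3f} | with alpha {full:.3f}")
```
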

To overcome the shortcomings of traditional econometrics, Dr. Webster advocates for causal regularization. This method, which originated at Amazon, uses biased data for training and unbiased data for testing, resulting in low-bias, low-variance estimators. It leverages the wealth of organizational data available and corrects for biases, enabling more accurate predictions.
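
A minimal sketch of this train-on-biased, validate-on-unbiased recipe is given below: a ridge-style shrinkage of the impact estimate is fit on a large alpha-driven history, and the shrinkage strength is the meta-parameter tuned on a small randomized sample. The data-generating process, penalty grid, and sample sizes are illustrative assumptions, not Webster's actual procedure.

```python
# Sketch of the train-on-biased / validate-on-unbiased idea behind causal regularization.
# The data-generating process, penalty grid, and sample sizes are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(2)

def simulate(n, alpha_driven=True):
    alpha = rng.normal(0, 1, n)
    trade = (0.8 * alpha if alpha_driven else 0.0) + rng.normal(0, 1, n)
    ret = 0.1 * trade + 0.5 * alpha + rng.normal(0, 1, n)
    return trade, ret

trade_b, ret_b = simulate(100_000, alpha_driven=True)   # large, biased (alpha-driven) history
trade_u, ret_u = simulate(2_000, alpha_driven=False)    # small, randomized (unbiased) experiment

best_lam, best_err, best_beta = None, np.inf, None
for lam in [0.0, 0.5, 1.0, 2.0, 4.0, 8.0, 16.0]:
    # Ridge-style shrinkage of the biased OLS slope toward zero.
    beta = (trade_b @ ret_b) / (trade_b @ trade_b + lam * len(trade_b))
    err = np.mean((ret_u - beta * trade_u) ** 2)        # validate on the unbiased data
    if err < best_err:
        best_lam, best_err, best_beta = lam, err, beta

print(f"chosen penalty {best_lam}, impact estimate {best_beta:.3f} (true value 0.1)")
```
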

Estimating alpha without knowledge of its impact poses a significant challenge, especially when trade data lacks trustworthiness. Dr. Webster suggests the use of random submission of trades to obtain unbiased data without relying on pricing technology. However, this approach necessitates forgoing a large fraction of trades to establish a confidence interval on alpha, which may not be practical. Alternatively, he proposes leveraging causal machine learning to achieve similar results with less data. Causal machine learning proves particularly valuable in trading applications, such as transaction cost analysis, price impact assessment, and alpha research, surpassing traditional econometrics due to the availability of deep, biased trading data.
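
The randomized-holdout idea can be sketched as follows: withhold a random fraction of intended trades and estimate alpha only on the skipped ones, whose subsequent returns carry no impact from our own trading. The synthetic data-generating process and the 20% skip rate are illustrative assumptions.

```python
# Sketch of the randomized-holdout idea: skip a random fraction of intended trades
# and measure alpha on the skipped ones, whose returns contain no price impact from
# our own trading. The data-generating process is an illustrative assumption.
import numpy as np

rng = np.random.default_rng(5)
n = 100_000
alpha_signal = rng.normal(0, 1, n)          # our signal before each intended trade
skip = rng.random(n) < 0.2                  # randomly withhold 20% of trades
impact = 0.1                                # impact applies only when we actually trade

ret = 0.05 * alpha_signal + impact * (~skip) + rng.normal(0, 1, n)

# On skipped trades the return is impact-free, so a regression on the signal is unbiased.
alpha_hat = np.polyfit(alpha_signal[skip], ret[skip], 1)[0]
print(f"alpha coefficient estimated on skipped trades: {alpha_hat:.3f} (true 0.05)")
```
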

The speaker also delves into the significance of statistical analysis in A/B testing, emphasizing the need to define price impact and attach a statistical measure to combat prediction bias. Without addressing this bias, analysis becomes subjective and reliant on individual interpretation. Dr. Webster acknowledges the challenges posed by observational public data and highlights the insights gained from interventional data. Although answering the question of which approach to adopt is complex, A/B testing remains a common practice in the banking and brokerage industries.

Lastly, Dr. Webster briefly discusses the relationship between transfer learning and causal regularization. While both involve training a model on one dataset and applying it to another, transfer learning lacks a causal interpretation. The analogy between the two lies in their validation process, with cross-validation playing a pivotal role. Despite their mathematical similarities, Dr. Webster emphasizes the novelty of the causal interpretation in the approach.

  • 00:00:00 Kevin Webster talks about live trading experiments and causal machine learning. He describes a scenario where a hedge fund trades through a broker, who is responsible for executing the trade and ensuring best execution while proving that they acted in the best interest of the client. The broker faces difficulties as their clients don't trade randomly based on Alpha signals, and the observed return during a trade is a mix of price impact and predicted price move caused by the trade. Webster aims to address this issue using causal regularization and proposes a model that learns how the predicted price move is related to the order flow.

  • 00:05:00 The speaker discusses the distinction between alpha signals and price impact, which are two components of returns. Alpha signals predict price moves that would happen regardless of whether stocks are traded or not, while price impact describes price moves caused by trading. Traders use price impact models to simulate how prices would react to their trades and to answer what-if scenarios. However, it is difficult to distinguish whether traders caused a price move or predicted it, leading to prediction bias. CFM's proprietary data and other techniques can help eliminate the biases and fix the prediction bias by factoring the trading signal that caused a trade into the regression.

  • 00:10:00 In this section of the video, Dr. Kevin Webster discusses the issue of prediction bias in trading and how it affects both brokers and alpha researchers. He explains that while an alpha researcher may have the alpha signal, they might not have a good price impact model, which leads to an overestimation of alpha. Conversely, if brokers don't know the alpha, they will trade too slowly for the client. Dr. Webster proposes using randomized trading, which is expensive, or causal regularization, a method that combines both randomized trading data and historical data in an intelligent way to get a better performance than traditional econometrics. He concludes by stating that he will compare the performance of these methods using a simulation.

  • 00:15:00 Dr. Kevin Webster discusses three methods of econometric testing, stresses the importance of causal inference, and explains how it is already being actively used in the tech industry, particularly in the machine learning community. He further emphasizes how these businesses are utilizing causal machine learning to enable their teams to align quickly on the ground truth, eliminate reinvestigating surprising findings, avoid rerunning faulty experiments, and prevent second-guessing of crucial decisions. Dr. Webster's methods utilize a combination of causal and econometric testing, allowing for more accurate predictions based on five times less data.

  • 00:20:00 Dr. Webster proposes a causal graph for his study involving three variables: the alpha of the trade, the size of the trade, and the returns during the trade. He assumes that the underlying features of his alpha models drive the fundamental price moves of the stock, and that his trading algorithm reacts to alpha signals, causing trades. He also assumes that trades cause price moves, known as price impact. According to Dr. Webster, no matter what fancy regression technique traders use, they will not be able to estimate price impact without observing alpha. Traders can estimate price impact by randomizing trades so that alpha averages out, a practice actively used in the finance industry and known as randomized trading experiments. However, its use is limited to substantive orders because such randomization is expensive.

  • 00:25:00 The speaker discusses the limitations of randomized trading experiments relative to observational data, due to the limited size and duration of experiments. For a reasonable set of parameters, the observational data set can be much larger than the interventional data set, and traders must design experiments before deploying them because mistakes are expensive. Using a simulator to determine the cost and confidence interval of the experiment before submitting random trades is crucial. Ignoring alpha, on the other hand, results in a high-bias, low-variance estimator.

  • 00:30:00 Dr. Kevin Webster explains the limitations of traditional econometrics and introduces the concept of causal regularization, a method that came from Amazon and involves using biased data as training data and unbiased data as testing data to tune meta-parameters. The method yields a low-bias, low-variance estimator, unlike traditional methods that only use a small amount of experimental data. The causal regularization algorithm makes it possible to use the large amount of organizational data available and to correct for any biases, providing accurate estimates for traders.

  • 00:35:00 In this section of the video, Dr. Kevin Webster discusses the challenges of estimating Alpha without knowing the impact when there is no trust in trade data. He suggests a solution where trades are randomly not submitted to get unbiased data, which is model-free and does not require pricing technology. However, the downside is that a large fraction of trades need to be foregone for obtaining a confidence interval on Alpha, which might not be practical for traders. He then proposes a machine learning method to address this issue and obtain the same result with less data. Causal machine learning is applicable to trading applications such as transaction cost analysis, price impact, and Alpha research and outperforms traditional econometrics in trading data regimes because of the availability of deep, biased trading data.

  • 00:40:00 The speaker discusses the fundamental uncertainty involved in A/B testing and how statistical analysis plays a crucial role: the ground truth can be established in a statistically significant way at the aggregate level, but not on a trade-by-trade basis. He emphasizes that defining price impact and attaching a statistical measure to that definition helps combat prediction bias. Without something to combat prediction bias, the analysis becomes subjective and depends on the eye of the beholder. Dr. Webster also discusses the challenges involved in observational public data and how interventional data can provide more insight into the analysis. He acknowledges that while it is a hard question to answer, A/B testing is a common practice that many banks and brokers adopt.

  • 00:45:00 Dr. Kevin Webster briefly discusses the relationship between transfer learning and causal regularization. He notes that there is an analogy between the two, as both involve training a model on one dataset and hoping it works well on another dataset. While transfer learning lacks a causal interpretation, the proof for transfer learning works because of cross-validation, which applies to causal regularization as well. Despite the mathematical similarity, Dr. Webster asserts that the causal interpretation of the approach is quite novel.
Dr. Kevin Webster: "Getting More for Less - Better A/B Testing via Causal Regularization"
  • 2022.11.09
  • www.youtube.com
Abstract: Causal regularization solves several practical problems in live trading applications: estimating price impact when alpha is unknown and estimating...
 

Yuyu Fan (Alliance Bernstein): "Leveraging Text Mining to Extract Insights"



Yuyu Fan (Alliance Bernstein): "Leveraging Text Mining to Extract Insights"

Yuyu Fan, a researcher at Alliance Bernstein, provides valuable insights into the application of natural language processing (NLP) and machine learning in analyzing earnings call transcripts and generating effective trading strategies.

Fan's team employed various techniques, including sentiment analysis, accounting analysis, and readability scoring, to screen over 200 features extracted from earnings call transcripts. They utilized advanced models like BERT (Bidirectional Encoder Representations from Transformers) to evaluate the sentiment of speakers, comparing the sentiment of CEOs with that of analysts. Interestingly, they found that analyst sentiment tends to be more reliable.
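
A rough approximation of this sentence-level scoring step is shown below using the Hugging Face transformers pipeline with the publicly available ProsusAI/finbert checkpoint as a stand-in; Alliance Bernstein's in-house fine-tuned model, labels, and data are not public, so this only illustrates the general workflow.

```python
# Sketch of BERT-based sentence sentiment on earnings-call text, using the public
# ProsusAI/finbert checkpoint as a stand-in for the firm's in-house fine-tuned model.
from transformers import pipeline

sentiment = pipeline("sentiment-analysis", model="ProsusAI/finbert")

sentences = [
    "Revenue grew 10% year over year, ahead of our guidance.",
    "We are lowering our full-year outlook due to weaker demand.",
]
for s in sentences:
    result = sentiment(s)[0]            # e.g. {'label': 'positive', 'score': 0.97}
    print(f"{result['label']:>8}  {result['score']:.2f}  {s}")
```
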

The analysis was conducted on both individual sections and combined sections of the transcripts, with the team discovering that a context-driven approach outperforms a naive bag-of-words approach. The sentiment signal, particularly for U.S. small-cap companies, performed well and was recommended by the investment teams.

In explaining the methodology, Fan describes how their team used quantile screening and backtesting to evaluate the performance of different features. They examined sentiment scores based on dictionary-based approaches as well as context-based approaches using BERT. The team also delved into readability scores, which measure the ease of understanding a text, focusing on CEO comments to identify potential correlations with company performance.
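
The quantile screening step can be sketched with pandas roughly as follows: rank names on a feature each month, cut them into quintiles, and compare average forward returns across buckets. The column names and the randomly generated data are illustrative assumptions, not the firm's universe or features.

```python
# Minimal quantile-screening sketch: rank companies by a feature each month, bucket
# into quintiles, and track average forward returns per bucket. Data are synthetic.
import numpy as np
import pandas as pd

rng = np.random.default_rng(3)
months = pd.period_range("2020-01", periods=24, freq="M")
tickers = [f"T{i:03d}" for i in range(300)]

df = pd.DataFrame(
    [(m, t, rng.normal(), rng.normal(scale=0.05)) for m in months for t in tickers],
    columns=["month", "ticker", "sentiment", "fwd_return"],
)
# Make the toy data weakly informative so a spread across buckets is visible.
df["fwd_return"] += 0.01 * df["sentiment"]

df["quintile"] = df.groupby("month")["sentiment"].transform(
    lambda s: pd.qcut(s, 5, labels=False) + 1
)
perf = df.groupby("quintile")["fwd_return"].mean()
print(perf)
print("top-minus-bottom quintile spread:", perf.iloc[-1] - perf.iloc[0])
```
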

Fan provides insights into the working of BERT, highlighting its bi-directional encoder representation that captures contextual information from the left and right of a given word. The team fine-tuned the BERT model for sentiment analysis by adding sentiment labels through self-labeling and external datasets. Their findings indicated that BERT-based sentiment analysis outperformed dictionary-based sentiment analysis, as demonstrated by examples from earnings call transcripts.

Furthermore, Fan discusses the challenges of setting accuracy thresholds for sentiment analysis and emphasizes that practical performance may not significantly differ between accuracy levels. She highlights the success of their sentiment signal on U.S. small-cap companies, which led to its recommendation by the investment teams. Fan also mentions the publication of a paper detailing NLP features that could serve as quant signals for creating efficient trading strategies, with ongoing efforts to enhance the model through data augmentation.

The discussion expands to cover the correlation between NLP features and traditional fundamental and quantitative features, highlighting the moderate correlation observed for the readability and accounting-based sentiment features. Fan clarifies their return methodology, including the selection of companies based on the latest available information before rebalancing.

Towards the end, Fan touches upon topics such as CO2 arbitrage, the difference between BERT and FinBERT, and the development of a financial usage model for BERT specifically tailored to finance-related filings, earnings, and news. The process of converting audio data into transcripts for analysis is also mentioned, with the use of transcription services and vendor solutions.

In summary, Yuyu Fan's research showcases the power of NLP and machine learning techniques in analyzing earnings call transcripts. The application of sentiment analysis, accounting analysis, and readability scoring, along with the utilization of advanced models like BERT, enables the generation of efficient trading strategies. The context-driven approach outperforms naive approaches, and the sentiment signal proves valuable, particularly for U.S. small-cap companies, as recommended by Alliance Bernstein's investment teams.

  • 00:00:00 Yuyu Fan talks about using natural language processing (NLP) to analyze earnings call transcripts in finance. Companies use earnings calls to share financial and business information with the investment community, and analysts typically analyze the transcripts for information that may impact companies' performance and stock prices. However, manually analyzing transcripts for a large universe of companies is labor-intensive, which is where NLP and machine learning techniques come in. Such techniques have been proven to be efficient in analyzing financial documents and formulating efficient trading strategies. Yuyu Fan's research expands beyond the typical testing on US large caps to include different universes, including US small caps and emerging markets. Additionally, the analysis is done on individual sections as well as the combined sections of the transcripts, and a systematic comparison shows that the context-driven approach outperforms the naive bag-of-words approach.

  • 00:05:00 Yuyu Fan discusses the data used for their text mining analysis and explains the structure of earnings call transcripts, which are composed of two sections - the presentation and the Q&A section. They generated NLP features on each of these individual sections as well as the combined sections. The three categories of NLP features generated are sentiment, accounting, and readability scores. They also provide a simple back testing method for their analysis. The sentiment features are further divided into two categories, one based on a dictionary and the other on context.

  • 00:10:00 Yuyu Fan from Alliance Bernstein explains how they use text mining to screen over 200 generated features in search of robust, well-performing investment signals. They consider not only data mining but also fundamental analysis and economic intuition, as well as prior research. They rank components by feature values and track monthly returns for each quantile to evaluate performance. The first category is simple word counts, and one of the features is the analyst question word count, which generally performs in line with their prior expectation, except in emerging markets, which behave differently. They evaluate performance using basic metrics like annualized return and volatility and find that this signal is just okay, not that good.

  • 00:15:00 The speaker explains the concept of readability scores and how her team uses them to analyze CEO comments. Readability scores are a metric for how difficult a text is to read and understand, taking into account the number of difficult words and sentence length. Higher scores mean the text is more difficult to understand, and lower scores mean it is easier to comprehend. Fan's team used an open-source Python package called textstat to calculate readability scores for CEO comments, with the hypothesis that easier-to-understand comments are more likely to signal transparency and good performance from companies. The team then used quantile screening to evaluate different features and recommend the best-performing ones to investment teams.

  • 00:20:00 Yuyu Fan of Alliance Bernstein discusses how sentiment analysis can be used to extract insights from CEO speech transcripts. Fan explains that sentiment scores can be calculated using dictionary-based approaches, such as using generic or proprietary dictionaries that are specifically designed for financial research. The results show that sentiment analysis based on the LM dictionary carries more investment signal, especially for US small-cap companies. Monthly rebalancing is used, and the companies are ranked by sector-neutral quintiles. The results for each quintile are more differentiable when using sentiment analysis, indicating higher sentiment leads to better performance.

  • 00:25:00 Yuyu Fan from Alliance Bernstein explains how their team utilized text mining to extract insights and evaluate speaker sentiment. They analyzed the differences between CEO sentiment and analyst sentiment, finding that analyst sentiment may be a more reliable indicator because CEOs tend to skew their remarks toward a favorable reading of the results. They also delved into natural language understanding, specifically utilizing the Transformer model called BERT. BERT uses a bi-directional encoder representation, meaning that it takes surrounding information on the left and the right into account to better infer a word's meaning within its context.

  • 00:30:00 Yuyu Fan explains how the BERT (Bidirectional Encoder Representations from Transformers) model works for sentiment analysis. The encoder part of the model is used for natural language understanding (no decoder is needed, since there is no translation task). The embeddings from this part of the model represent information from the entire sentence and can be fine-tuned to create a sentiment classification model. By using pre-trained BERT models and adding a downstream sentiment classification task, fine-tuning is made much easier. The sentiment labels are added through self-labeling and through external labeled datasets, and the model is trained to predict sentiment scores in a range from -1 to 1. Finally, Fan shows that BERT-based sentiment analysis outperforms dictionary-based sentiment analysis, with examples from earnings call transcripts.

  • 00:35:00 Yuyu Fan from Alliance Bernstein discusses text mining and how a pre-trained BERT model can be fine-tuned with specific label sentences to improve classification of financial text. The pre-trained model’s large vocabulary coverage of English tokens allows for capturing combinations and generating words, but it may not capture specific financial language. When asked about performance on sentences with both positive and negative words, Yuyu Fan explains that the classification may depend on the analyst's interpretation and expectation, but the sentence itself can be classified as positive if it reports a 10% increase in revenue.

  • 00:40:00 Yuyu Fan from Alliance Bernstein explains that it's difficult to have a hard threshold for accuracy in sentiment analysis. While it may make a big difference in academia, in practical applications, it may not make much difference since a 90% accuracy and a 92% accuracy may lead to similar performance when aggregated to the section level using mean or standard deviation. Fan explains that their model has around 90% accuracy on all sentences, and their sentiment signal performs well on U.S. small cap companies, making it a signal that their investment teams recommend using. Fan also shares that they published a paper with more details on NLP features that could be used as quant signals to form efficient trading strategies, and they are currently working on data augmentation to improve the model.

  • 00:45:00 Yuyu Fan, a data scientist at Alliance Bernstein, discusses how their NLP features correlate with traditional fundamental and quantitative features. They found that correlations are generally low, with readability and accounting-based sentiment showing a medium correlation of around 0.54 with large-cap momentum. She also explains how they measure readability using packages such as textstat, with customizations for their usage. Fan further clarifies their return methodology, where they track one-month returns and only include companies with the latest information available before the rebalance day, typically after quarterly earnings calls for large caps. Finally, she addresses a question on CO2 arbitrage and clarifies the difference between BERT and FinBERT, which they use in their method.

  • 00:50:00 Yuyu Fan discusses the use of text mining to extract insights. She mentions the development of a financial usage model of the BERT model, specifically focused on filings, earnings, and news related to finance. The model distinguishes between pre-trained versions and those that are fine-tuned, with labels for positive, negative, and neutral output probabilities. Fan notes that the accuracy of the model varies across different sectors, and they are exploring avenues for data augmentation to improve sentiment classification for specific topics. The section ends with a discussion on the process of converting audio data into transcripts for analysis.

  • 00:55:00 Yuyu Fan from Alliance Bernstein discusses the use of text mining to extract insights. The company uses SMT for high-quality vendor data, as well as transcription services and vendor solutions for collaborations. They are also experimenting with a model called Whisper from OpenAI, which uses large-scale transformer models for audio transcription, including multilingual transcription. Due to time constraints, the Q&A session ends there.
Yuyu Fan (Alliance Bernstein): "Leveraging Text Mining to Extract Insights"
  • 2022.10.26
  • www.youtube.com
Welcome to the first of the UBS and CFEM AI, Data and Analytics Speaker Series!Yuyu Fan of Alliance Bernstein spoke about "Leveraging Text Mining to Extract ...
 

Ciamac Moallemi (Columbia): "Liquidity Provision and Automated Market Making"



Ciamac Moallemi (Columbia): "Liquidity Provision and Automated Market Making"

In this comprehensive discussion, Ciamac Moallemi, a professor from Columbia University, delves into the intricacies of liquidity provision and automated market making (AMM) from various angles. He emphasizes the relevance of AMMs in addressing the computational and storage challenges faced by blockchain platforms and their ability to generate positive returns for liquidity providers. To illustrate the concept, Moallemi presents the adverse selection cost for volatility in UniSwap V2, revealing an annual cost of approximately $39,000 on a $125 million pool. He emphasizes the significance of volatility and trading volume in determining liquidity provider returns and elucidates how AMMs handle arbitrageurs and informed traders.

Moallemi underscores the advantages of utilizing AMMs on the blockchain and explores the roles of pooled value functions and bonding functions. He highlights the importance of hedging risks and costs associated with rebalancing strategies. Furthermore, Moallemi introduces his own model for liquidity provision and automated market making, comparing it to actual data from the Ethereum blockchain. He discusses how his model can potentially enhance AMMs by reducing costs paid to intermediaries. Moallemi proposes various approaches to mitigate inefficiencies caused by suboptimal prices, such as utilizing an oracle as a data source and selling arbitrage rights to authorized participants, enabling them to trade against the pool without fees.
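
A minimal sketch of the comparison at the heart of this kind of model, assuming a constant product pool and a driftless lognormal price (parameters are illustrative, not the talk's calibration), simulates the LP position alongside a self-financing portfolio that always holds the pool's current risky-asset position; the gap that accumulates is the adverse selection cost before fees.

```python
# Toy simulation: a constant product LP position versus a self-financing portfolio
# that holds the same risky-asset position at market prices. The accumulating gap
# is the adverse-selection cost before fees. Volatility, horizon, and pool size are
# illustrative assumptions.
import numpy as np

rng = np.random.default_rng(4)
sigma, dt, steps = 0.5, 1.0 / (365 * 24), 365 * 24   # annual vol, hourly steps over one year
L = 1_000.0                                          # pool invariant: x * y = L**2
price = 2_000.0

def pool_holdings(p):
    # Constant product with invariant L**2: x = L / sqrt(p), y = L * sqrt(p) at price p.
    return L / np.sqrt(p), L * np.sqrt(p)

x, y = pool_holdings(price)
V = price * x + y                      # rebalancing portfolio starts identical to the pool
for _ in range(steps):
    p_new = price * np.exp(sigma * np.sqrt(dt) * rng.normal() - 0.5 * sigma**2 * dt)
    V += x * (p_new - price)           # self-financing: hold the pool's risky position, mark to market
    price = p_new
    x, y = pool_holdings(price)        # the pool's holdings slide along the bonding curve

pool_value = price * x + y             # equals 2 * L * sqrt(price)
print(f"rebalancing portfolio value: {V:,.0f}")
print(f"LP position value:           {pool_value:,.0f}")
print(f"adverse-selection gap:       {V - pool_value:,.0f}")
```
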

Additionally, Moallemi elucidates the advantages of AMMs over traditional limit order books, particularly in terms of simplicity and accessibility. He highlights how AMMs level the playing field for less sophisticated participants by eliminating the need for complex algorithms and extensive resources. Moallemi concludes by expressing optimism about the potential for better structures that benefit a wider range of participants, positioning AMMs as a step in the right direction.

  • 00:00:00 In this section, Ciamac Moallemi of Columbia University discusses liquidity provision and automated market making, focusing mainly on the automated market makers in the crypto world. He explains that the problem of trading is largely solved by electronic limit order books in traditional finance, but there are a couple of issues with adopting this structure wholesale in crypto. Moallemi discusses the computational and storage costs of using blockchain for trading and how automated market makers can address these issues by using pricing algorithms to quote both buy and sell prices for an asset, providing liquidity to the market.

  • 00:05:00 In this section, the speaker discusses the challenges of using limit order books for trading in an environment with high update rates and limited computation and storage. Market making requires the participation of active market makers and can be difficult to bootstrap in the cryptocurrency world, particularly for new tokens. To address these challenges, people have developed automated market makers (AMMs), in which liquidity providers deposit assets such as ETH and US dollars into a pool. These AMMs are computationally efficient and do not require sorting or comparison, making them ideal for blockchain environments. Liquidity providers are paid fees for trades executed against the pool, and the price is set as a function of what is in the pool.

  • 00:10:00 In this section, Ciamac Moallemi discusses liquidity provision and automated market making (AMM) from the perspective of liquidity providers, outlining the costs and benefits of using AMM systems like Uniswap. Moallemi explains that while passive liquidity providers can earn fees through AMM systems, there is always a cost to market-making, such as adverse selection. Using a concrete example from Uniswap V2, Moallemi shows that the adverse selection cost for volatility is typically three basis points, resulting in an annualized cost of about $39,000 on a pool worth $125 million. Despite the cost, Moallemi notes that AMM systems can generate a positive return for liquidity providers, but it's essential to accurately assess the risks and costs involved.
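
As a sanity check on those numbers, the arithmetic can be reproduced in a couple of lines; the pool size and the roughly three-basis-point cost are the only inputs, taken from the summary above rather than from the talk's slides.

```python
# Back-of-the-envelope check of the figures quoted above; this is arithmetic
# on the summary's numbers, not the talk's model.
pool_value = 125_000_000        # pool size in USD
adverse_selection_bps = 3       # roughly three basis points, as quoted

cost = pool_value * adverse_selection_bps / 10_000
print(f"adverse selection cost: ${cost:,.0f}")   # $37,500, in the ballpark of the ~$39k cited
```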

  • 00:15:00 In this section, Ciamac Moallemi of Columbia University discusses liquidity provision and automated market making. Moallemi highlights the drivers behind liquidity provider returns, specifically the importance of volatility and trading volume. He also describes a back-of-the-envelope decomposition of LP returns into hedge return, trading fees, and the LVR ("loss-versus-rebalancing") cost. Moallemi explains that LVR is an adverse selection cost arising from prices being set on a centralized exchange while the AMM trades at stale prices and suffers slippage. This creates arbitrage opportunities, with arbitrageurs profiting from the pool, resulting in a zero-sum game. The differences between informed traders and arbitrageurs are also discussed.
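
A minimal sketch of that back-of-the-envelope decomposition, with placeholder dollar amounts rather than figures from the talk:

```python
# LP return decomposition: hedged P&L = hedge return + trading fees - LVR cost.
# All inputs are illustrative placeholders, not numbers from the talk.
hedge_return = 0.0      # market exposure assumed hedged away on a centralized exchange
trading_fees = 120_000  # fees earned from traders over the period (USD)
lvr_cost = 90_000       # loss-versus-rebalancing ("LVR") cost over the period (USD)

lp_pnl = hedge_return + trading_fees - lvr_cost
print(f"LP hedged P&L: ${lp_pnl:,.0f}")  # positive only if fees outrun LVR
```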

  • 00:20:00 In this section, Ciamac Moallemi discusses the difficulty of measuring informed trading and how volatility arises, using a classic model of adverse selection. He also talks about the option-pricing interpretation and the convenience of working in continuous time with closed-form formulas. Moallemi mentions other popular topics in the market making world, such as prediction markets and automated market makers. He then explains how a blockchain works as a computer, keeping track of state transitions and payments, with Ethereum being a more complex and expensive version of the system. Despite being slow and costly, blockchain remains a vital platform for trading and prediction markets.

  • 00:25:00 In this section, Ciamac Moallemi discusses why finance dominates the use of these slow computers, especially for applications involving small transactions or simple computational tasks. He shows a chart of the share of Ethereum's resources spent on finance-related applications, with trading being the biggest subcategory and Uniswap the most significant protocol or smart contract. Although most crypto trading occurs on centralized exchanges, decentralized exchanges like Uniswap are also significant, with roughly one trillion dollars traded in aggregate. Moallemi presents a continuous-time Black-Scholes setup with stochastic volatility as a model for trading two assets: a risky asset denoted by "x" and a riskless asset, the numeraire, denoted by "y," with the market price being the price of the risky asset in terms of the numeraire.
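
For concreteness, a toy simulation of the risky asset's price in units of the numeraire; the talk's setup allows stochastic volatility, whereas this sketch assumes constant volatility and arbitrary parameter values.

```python
import numpy as np

rng = np.random.default_rng(2)

T, n = 1.0, 252                    # one year of daily steps (illustrative)
dt = T / n
mu, sigma = 0.0, 0.8               # constant volatility as a stand-in for the stochastic-vol setup

# Price of the risky asset x in units of the numeraire y, simulated as a geometric Brownian motion.
log_returns = (mu - 0.5 * sigma**2) * dt + sigma * np.sqrt(dt) * rng.standard_normal(n)
prices = 2000.0 * np.exp(np.cumsum(log_returns))
print(prices[0].round(2), prices[-1].round(2))
```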

  • 00:30:00 In this section of the video, Ciamac Moallemi explains the concept of liquidity provision and automated market making in the financial industry, and how they work. He notes that returns in this setup follow a random walk, a very standard model in finance. He then explains the automated market maker as a constant function market maker, where every trade must keep the value of a bonding function constant. Liquidity providers contribute reserves, and traders can only move the reserves to another point on the curve, which preserves the invariant. The slope of the tangent to the curve gives the instantaneous price. The mechanism only needs to know its own inventory to check the invariant and accept or reject trades.
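
To make the constant-function mechanics concrete, here is a toy constant-product (x·y = k) market maker. The class and its parameters are illustrative, not Uniswap's actual contract logic, and fees are omitted.

```python
class ConstantProductAMM:
    """Toy x * y = k pool: swaps must keep the invariant, spot price = y / x."""

    def __init__(self, x_reserve: float, y_reserve: float):
        self.x = x_reserve          # risky asset reserves
        self.y = y_reserve          # numeraire reserves
        self.k = x_reserve * y_reserve

    def spot_price(self) -> float:
        # Slope of the bonding curve at the current reserves.
        return self.y / self.x

    def swap_x_for_y(self, dx: float) -> float:
        # Trader deposits dx of the risky asset; the pool pays out dy of numeraire
        # chosen so that (x + dx) * (y - dy) = k still holds.
        new_x = self.x + dx
        new_y = self.k / new_x
        dy = self.y - new_y
        self.x, self.y = new_x, new_y
        return dy


pool = ConstantProductAMM(x_reserve=1_000.0, y_reserve=2_000_000.0)  # spot price = 2000
print(pool.spot_price())            # 2000.0
print(pool.swap_x_for_y(10.0))      # ~19,801.98 of numeraire out (price impact)
print(pool.spot_price())            # price falls after the pool absorbs more x
```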

  • 00:35:00 In this section, Ciamac Moallemi discusses the benefits of using automated market making (AMM) on the blockchain as opposed to traditional exchanges like Binance. Being on the blockchain enables services such as collateralized lending or portfolio optimization that are not possible on traditional exchanges, and trading can be invoked as a subroutine of a computer program, a feature traditional exchanges lack. While AMM fees are proportional to quantity, the level of liquidity each provider supplies can change over time, affecting how fees are distributed. The market model involves two types of traders: arbitrageurs, who constantly monitor the centralized exchange and external market, and noise traders, who get utility from trading on the blockchain. The analysis assumes constant liquidity provision, cash payments for fees, and ignores the distinction between discrete and continuous time on the blockchain.

  • 00:40:00 In this section, Ciamac Moallemi explains the concept of liquidity provision and automated market making. He uses the example of trading x for y, where the bonding curve determines the rate at which one asset can be exchanged for the other. He explains that the problem is better described by moving to dual variables, which are the prices defining supporting hyperplanes. The pool value function is the critical object, and it is assumed to be smooth and twice continuously differentiable. Moallemi also discusses the constant-product case and the properties of the bonding function, which arbitrageurs are incentivized to keep in balance. The arbitrageurs constantly monitor the market and leave the least possible value in the pool in order to make the most money.
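
In the constant-product case the pool value function has a simple closed form: arbitrageurs move the reserves to the point on the curve that minimizes the pool's marked-to-market value, giving V(P) = 2·sqrt(k·P). A small sketch using this textbook formula, not code from the talk:

```python
import numpy as np

def cpmm_pool_value(k: float, price: float) -> float:
    """Value of a constant-product pool marked at external price P.

    Arbitrageurs move reserves to the point on x * y = k that minimizes
    P * x + y, which gives the closed form V(P) = 2 * sqrt(k * P).
    """
    return 2.0 * np.sqrt(k * price)

k = 1_000.0 * 2_000_000.0            # invariant of the toy pool above
for p in (1800.0, 2000.0, 2200.0):
    print(p, round(cpmm_pool_value(k, p), 2))   # concave in the external price
```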

  • 00:45:00 In this section, Ciamac Moallemi of Columbia University discusses the key ingredients needed for liquidity provision and automated market making, including the rebalancing strategy and the role of arbitrage. The rebalancing strategy buys and sells the risky asset in the same way the arbitrageurs do, but trades on the centralized exchange at fair market prices. The loss-versus-rebalancing (LVR) theorem characterizes the resulting process as non-negative, non-decreasing, and predictable, showing that the pool's reserves systematically lose value relative to trading on the exchange. These key ingredients are important for effective liquidity provision and market making.
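
A toy simulation of the comparison just described, under the assumption of a constant-product pool and an arbitrary price path: the rebalancing portfolio holds the same risky position as the pool but trades at external prices, and the gap between the two is non-negative and non-decreasing along the path.

```python
import numpy as np

rng = np.random.default_rng(4)

k = 1.953125e12                        # invariant chosen so the pool is worth ~$125M at P = 2000
sigma, n = 0.05, 250                   # illustrative per-step volatility, 250 steps
prices = 2000.0 * np.exp(np.cumsum(sigma * rng.standard_normal(n)))

def pool_value(p):                     # V(P) = 2 * sqrt(k * P) for a constant-product pool
    return 2.0 * np.sqrt(k * p)

def x_reserve(p):                      # risky-asset reserves arbitrageurs leave in the pool
    return np.sqrt(k / p)

# Rebalancing strategy: hold the same risky position as the pool, but trade at external prices.
rebal = pool_value(prices[0]) + np.cumsum(x_reserve(prices[:-1]) * np.diff(prices))
lvr = rebal - pool_value(prices[1:])   # loss-versus-rebalancing along the path

print(round(lvr[-1]), bool(np.all(np.diff(lvr) >= -1e-6)))  # non-negative, (numerically) non-decreasing
```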

  • 00:50:00 In this section, Ciamac Moallemi discusses the risks and costs associated with the rebalancing strategy and how they can potentially be hedged. He explains that the instantaneous change in value of the portfolio has two components: the first is market risk, meaning exposure to the market, while the second is locally riskless and predictable but represents a systematic running cost. Moallemi further breaks down the formula for the instantaneous LVR, showing how it depends on the instantaneous variance and the amount of liquidity available at the current price level. He also demonstrates how the formula applies to the constant product market maker.
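
A sketch of the instantaneous loss-versus-rebalancing formula as it appears in the published LVR work, ℓ = ½·σ²·P²·|V''(P)|, specialized to the constant-product pool where it reduces to σ²/8 times pool value. The numbers below are illustrative, and the exact constants should be treated as an assumption rather than a quote from the talk.

```python
import numpy as np

def lvr_rate(sigma: float, price: float, v_second_deriv: float) -> float:
    """Instantaneous LVR: 0.5 * sigma^2 * P^2 * |V''(P)|,
    per unit of the horizon over which sigma is measured."""
    return 0.5 * sigma**2 * price**2 * abs(v_second_deriv)

def lvr_rate_cpmm(sigma: float, pool_value: float) -> float:
    """Constant-product special case: with V(P) = 2*sqrt(k*P),
    the formula collapses to sigma^2 / 8 times pool value."""
    return sigma**2 / 8.0 * pool_value

# Illustrative numbers: 5% volatility, $125M constant-product pool at P = 2000.
sigma, price, pool_value = 0.05, 2000.0, 125e6
k = (pool_value / 2.0) ** 2 / price              # back out k from V(P) = 2*sqrt(k*P)
v_pp = -np.sqrt(k) / (2.0 * price**1.5)          # V''(P) for the constant-product pool
print(round(lvr_rate(sigma, price, v_pp)))       # ~ 39062
print(round(lvr_rate_cpmm(sigma, pool_value)))   # ~ 39062, i.e. about 3 bp of the pool
```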

  • 00:55:00 In this section, Moallemi explains that the pool's value will never go to zero, but it may become less valuable than an alternative. The rebalancing strategy may systematically make money over time, as it sells when prices go up and buys when prices go down. Additionally, Moallemi discusses the assumptions of the model, stating that for the most liquid pools it is reasonable to assume an external market, but for the long tail this is not a good assumption. Even so, the model is useful as a predictive model and gives consistent pricing. Moallemi then explains a way to use it predictively: look at the fees the LP collects and the change in pool value, and hedge the market risk by taking the opposite of the rebalancing trades on Binance.

  • 01:00:00 In this section, Ciamac Moallemi of Columbia University discusses the empirical results of his model for liquidity provision and automated market making. He compares the hedged profit and loss (P&L) of using his formula versus actual data from the Ethereum blockchain, and finds that they are close, indicating that his model is correct. He then looks at the daily volatility and P&L fluctuations of an example Uniswap pool with $200 million in coins. The P&L fluctuations are due to market risk, and Moallemi demonstrates that they can be hedged using his formula, resulting in a positive return and high Sharpe ratio, even though it may not necessarily be a money-making strategy due to trading and financing costs. He suggests that this tool can be used to improve automated market makers by reducing costs paid to intermediaries.
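
A toy version of that hedged-P&L accounting: accrue placeholder fees, subtract a constant-product LVR term, and compute a Sharpe ratio. The inputs are simulated stand-ins, not the Ethereum data used in the talk.

```python
import numpy as np

rng = np.random.default_rng(0)

days = 365
pool_value = 200e6                                  # illustrative, like the ~$200M pool mentioned
daily_sigma = rng.uniform(0.02, 0.06, size=days)    # placeholder volatility path
daily_fees = rng.uniform(20e3, 80e3, size=days)     # placeholder fee income

lvr = daily_sigma**2 / 8.0 * pool_value             # constant-product LVR accrued each day
hedged_pnl = daily_fees - lvr                       # market risk assumed hedged away elsewhere

sharpe = hedged_pnl.mean() / hedged_pnl.std() * np.sqrt(365)
print(round(hedged_pnl.sum()), round(sharpe, 2))    # ignores trading and financing costs
```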

  • 01:05:00 In this section, Ciamac Moallemi discusses ways to mitigate the inefficiencies caused by bad prices in liquidity provision and automated market making. He suggests using an oracle to bring prices from exchanges like Binance into smart contracts and prevent off-market trading. Additionally, Moallemi presents the idea of selling arbitrage rights to authorized participants who can trade against the pool without paying fees, giving them priority to exploit smaller price discrepancies. These participants would give some of their profit back to the LPs, helping to mitigate bad prices so that both LPs and noise traders benefit. Moallemi also addresses questions about implementing AMMs to trade on Binance and about shorting in crypto markets. He notes that shorting can be expensive due to funding costs and that volume and volatility are highly correlated, which can make a strategy that is long volume but short volatility risky.

  • 01:10:00 In this section, Moallemi explains the issues with the Request for Quote (RFQ) protocol in smart contracts: it requires waiting for others to respond, which breaks the atomicity of smart contracts. A popular indirect alternative, just-in-time liquidity, lets providers supply liquidity to big orders just before they are processed, effectively front-running them. Moallemi also explains how traditional market makers hedge their risk and hold shares only for short periods before selling, and argues that liquidity providers should likewise hedge to manage market risk. The liquidity pool model works best in crypto because blockchains are slow computers and there is a large number of coins to support.

  • 01:15:00 In this section, the speaker discusses the advantages of automated market makers (AMMs) over limit order books, particularly in terms of simplicity and accessibility. They explain that the complexity of limit order books makes them hard to use without algorithms and an army of PhDs, which tilts the playing field toward institutional investors who have those resources. AMMs simplify the process, allowing average participants to benefit without extensive knowledge or infrastructure. The speaker sees potential for better structures that benefit less sophisticated participants, making AMMs a step in the right direction.
Ciamac Moallemi (Columbia): "Liquidity Provision and Automated Market Making"
Ciamac Moallemi (Columbia): "Liquidity Provision and Automated Market Making"
  • 2022.09.14
  • www.youtube.com
Abstract: In recent years, automated market makers (AMMs) and, more specifically, constant function market makers (CFMMs) such as Uniswap, have emerged as t...
 

Andreea Minca (Cornell ORIE): Clustering Heterogeneous Financial Networks



Andreea Minca (Cornell ORIE): Clustering Heterogeneous Financial Networks

Professor Andreea Minca, an expert in financial networks at Cornell ORIE, has dedicated her research to the complexities of clustering heterogeneous financial networks. She introduces an innovative regularization term to tackle the unique challenges posed by these networks, particularly the presence of outliers with arbitrary connection patterns. These outliers hinder the performance of spectral clustering algorithms and turn clustering into an NP-hard combinatorial problem.

To identify these outliers based on their connection patterns, Minca works with the stochastic block model and the degree-corrected stochastic block model. These models admit theoretical guarantees of exact recovery without making assumptions about the outlier nodes, beyond knowing how many there are. The heterogeneity inherent in financial networks further complicates the detection of outliers based solely on node degrees.

Minca then describes how the network is partitioned into clusters and outliers by constructing a partition matrix and a permutation of the nodes, illustrating the approach with an analysis of the Korean banking system. She also employs a Gibbs sampler to fill gaps in the exposure network, and she clusters overlapping portfolios by the strength of their overlap to support efficient risk allocation and diversification of investments.

In her work, Minca emphasizes the importance of generating clusters that exhibit meaningful inter-connectivity rather than clusters with no connectivity. She proposes an approach that offers five alternatives for diversification under a cluster risk parity framework, highlighting the need for careful consideration when using clustering algorithms for achieving diversification in financial networks. Minca advises quantifying the performance of clustering algorithms using standard investment categories and emphasizes the significance of informed decision-making when utilizing these techniques.

Overall, Professor Andreea Minca's research provides valuable insights into the intricacies of clustering heterogeneous financial networks, offering innovative approaches and practical solutions to address the challenges associated with these networks. Her work contributes to the advancement of risk analysis, portfolio selection, and understanding the structural dynamics of financial systems.

  • 00:00:00 Professor Andreea Minca discusses her work on developing algorithms to cluster financial networks, based on two examples. The first concerns networks of overlapping portfolios, with applications in portfolio selection; the second concerns the network of exposures, which relates to systemic risk analysis and understanding the level of risk in the network. The goal is to match clustering algorithms to financial networks and create meaningful clusters whose members are jointly vulnerable to illiquidity or default at one institution. The larger a cluster, the larger the potential impact of stress on one of its members, highlighting the importance of understanding financial network structures.

  • 00:05:00 Andreea Minca discusses the challenges of clustering financial networks; clustering structure is common to all real-world networks, since nodes tend to form groups where intra-group connectivity is higher than inter-group connectivity. There are various clustering algorithms, but financial networks are heterogeneous in their degrees, their weights, and their inter-community connectivity, which poses a challenge. Additionally, the presence of outliers makes it hard to apply off-the-shelf algorithms: outliers may have the same connection patterns as inlier nodes, yet they cannot simply be treated as a cluster of their own. Together these issues make existing algorithms difficult to apply to financial networks.

  • 00:10:00 Andreea Minca from Cornell ORIE discusses the challenges faced when clustering heterogeneous financial networks and the novel regularization term introduced to overcome them. One of the main challenges is the presence of outliers that have arbitrary connection patterns and behave as adversaries, hindering the performance of clustering algorithms such as spectral clustering. The clustering problem itself is an NP-hard combinatorial problem, which can be relaxed to a semi-definite program with tractable algorithms. The goal is to prove that, under certain conditions, the true clusters are recovered, and the regularization term penalizes outliers with unusual connection patterns.

  • 00:15:00 Andreea Minca discusses the application of the stochastic block model and degree-corrected stochastic block model to detect clustering patterns in heterogeneous financial networks. The goal is to detect outliers based on their connection patterns. The theoretical guarantees provided ensure exact recovery without making assumptions about the outlier nodes, except for knowing their numbers. The density gap conditions are based on the difference between inter-cluster and intra-cluster edge density. The results are more robust than previous literature as they are independent of the number of outliers, and they only depend on the number of inliers. The heterogeneity in financial networks makes it difficult to detect outliers based on degrees, as nodes can have high degrees due to the structure of nodes in the same cluster.
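
For readers who want to experiment, here is a small sketch of drawing a network from a degree-corrected stochastic block model, the generative model referenced above; all parameter values are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(1)

labels = np.repeat([0, 1, 2], 30)                 # three planted clusters of 30 nodes
B = np.array([[0.30, 0.05, 0.05],                 # intra/inter-cluster connection rates
              [0.05, 0.25, 0.05],
              [0.05, 0.05, 0.20]])
theta = rng.uniform(0.5, 1.5, size=labels.size)   # degree-correction weights per node

P = np.minimum(np.outer(theta, theta) * B[np.ix_(labels, labels)], 1.0)
A = (rng.random(P.shape) < P).astype(int)
A = np.triu(A, 1); A = A + A.T                    # symmetric adjacency, no self-loops
print(A.sum() // 2, "edges among", labels.size, "nodes")
```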

  • 00:20:00 Andreea Minca explains the concept of heterogeneity in financial networks and how it affects clustering algorithms. She uses the example of the Korean banking system to illustrate how banks and insurance companies in the same sector can exhibit heterogeneity and should not be classified as outliers. Minca notes that the heavy-tailed degree distribution in financial networks calls for a careful look at connectivity patterns and at how each node's degree enters the recovery bounds. She also emphasizes the need for penalty terms in the algorithm that account for degree, since a homogeneous penalization cannot be applied to all nodes. Finally, Minca outlines the fundamentals of the clustering model, which includes specifying heterogeneity parameters and connectivity matrices for each cluster.

  • 00:25:00 Andreea Minca discusses the challenges of clustering in the presence of outliers when using off-the-shelf clustering methods. The goal of detecting outliers is to raise a red flag without hindering the clustering itself or mistakenly hindering the classification of inlier nodes. By adjusting connectivity through the heterogeneity parameters, the adjacency matrix can be written in a block form, with the clusters first and the outliers last, and the task is to find the permutation matrix that maps the observed network onto this underlying structure of clusters and outliers. These adjustments accommodate a wide range of modeling choices in financial networks.

  • 00:30:00 In this section of the video, Andreea Minca explains the process of finding a partition matrix and permutation of nodes to identify the structure of clusters and outliers in financial networks. The algorithm is based on finding a partition matrix that indicates which nodes belong to the same cluster, while arbitrary entries represent outliers. To illustrate the concept, Minca shows an example of a Korean financial network, where the algorithm's goal is to determine the correct identification of each sector present in the network.

  • 00:35:00 Andreea Minca, a professor at Cornell ORIE, discusses her work on creating semi-synthetic networks and testing algorithms. She explains that she builds a network from data published by the Bank of Korea on the size of assets and liabilities of all financial institutions, connecting them according to the aggregate flow from any insurance company to any bank. She then uses a modularity maximization algorithm to identify which financial institutions belong to which sector based on the observed connectivity pattern. The algorithm also introduces a tuning parameter and a partition-matrix constraint into the modularity maximization.
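
The modularity-maximization step can be illustrated with a generic off-the-shelf routine; the sketch below uses networkx's greedy modularity communities on a planted-partition graph as a stand-in, not Minca's tuned algorithm with its partition-matrix constraint.

```python
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities, modularity

# Toy stand-in for a sectoral network: two dense groups of 20 nodes, sparse links between them.
G = nx.planted_partition_graph(l=2, k=20, p_in=0.4, p_out=0.02, seed=7)

communities = greedy_modularity_communities(G)
for i, nodes in enumerate(communities):
    print(f"community {i}: {len(nodes)} nodes")
print("modularity:", round(modularity(G, communities), 3))
```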

  • 00:40:00 Andreea Minca explains the challenge of searching for partition matrices in clustering heterogeneous financial networks, which is an intractable problem due to a specific constraint. To overcome this, a relaxation of the problem is introduced where the entries of the matrix are between 0 and 1, and it is positive semi-definite. The problem's heterogeneity is addressed through penalty terms, where a penalty on the diagonal term penalizes potential outliers whose degree is beyond the normal variation. Two tuning parameters control the strength of the diagonal penalization and are determined by the observed degree of the nodes, allowing the identification of outliers and those with a strong community membership. The Korean industry example used in the video is a snapshot of the exposures in the Korean network, and there is no time series component.
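
A hedged sketch of the kind of relaxation described: a positive semi-definite matrix variable with entries between 0 and 1 and a degree-weighted diagonal penalty intended to flag outliers. The modularity-style objective and the single penalty weight are illustrative choices, not the exact program from the paper.

```python
import cvxpy as cp
import numpy as np

def relaxed_clustering(A: np.ndarray, lam: float = 1.0):
    """Sketch of a relaxed partition-matrix problem: maximize a modularity-style
    objective over a PSD matrix with entries in [0, 1], with a degree-weighted
    diagonal penalty meant to flag outliers. Illustrative only."""
    n = A.shape[0]
    d = A.sum(axis=1)
    m = d.sum() / 2.0
    B = A - np.outer(d, d) / (2.0 * m)           # modularity matrix

    Z = cp.Variable((n, n), PSD=True)
    penalty = lam * cp.sum(cp.multiply(d, cp.diag(Z)))
    prob = cp.Problem(cp.Maximize(cp.sum(cp.multiply(B, Z)) - penalty),
                      [Z >= 0, Z <= 1])
    prob.solve()
    return Z.value                               # read off clusters, e.g. by thresholding entries

# Usage: pass the adjacency matrix A generated in the DCSBM sketch above.
```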

  • 00:45:00 Andreea Minca from Cornell ORIE discusses clustering heterogeneous financial networks and how to recreate sample networks that are consistent with aggregate values by using a Gibbs sampler to fill in gaps. The algorithm's performance can be tested via the misclassification rate and the probability of recovery, which tends to one at a certain speed as the sample size becomes large. Using the Korean sector as an example, Minca demonstrates how the connectivity matrix can represent the probability of connection between different sectors, and how the clustering results are obtained from the matrix.
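
The talk describes a Gibbs sampler for filling in bilateral exposures consistent with published aggregates. As a simpler, deterministic stand-in, the sketch below uses iterative proportional fitting (RAS) to scale a starting matrix until its row and column sums match given totals.

```python
import numpy as np

def ipf(seed: np.ndarray, row_totals, col_totals, iters: int = 200) -> np.ndarray:
    """Scale `seed` so its row/column sums match the aggregate totals
    (e.g. each institution's total assets and liabilities)."""
    X = seed.astype(float).copy()
    for _ in range(iters):
        X *= (np.asarray(row_totals) / X.sum(axis=1)).reshape(-1, 1)
        X *= (np.asarray(col_totals) / X.sum(axis=0)).reshape(1, -1)
    return X

rng = np.random.default_rng(3)
seed = rng.random((4, 4)) * (1 - np.eye(4))        # no self-exposures
X = ipf(seed, row_totals=[10, 20, 15, 5], col_totals=[12, 8, 20, 10])
print(X.round(2))
print(X.sum(axis=1).round(2), X.sum(axis=0).round(2))   # matches the target aggregates
```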

  • 00:50:00 Andreea Minca discusses the challenge of identifying the correct financial sector for each institution in a network based on its connectivity pattern. She explains that an algorithm must be robust to heterogeneity in connectivity and that misclassification rates are used as the performance criterion. Minca compares the misclassification rates of her algorithm with those of other existing algorithms, highlighting that the spectral clustering-based algorithm is the worst performer, which underscores the need to adapt existing algorithms to the issues financial networks present. Additionally, Minca briefly touches on applications of the algorithm in investment portfolio management: by recreating a network of overlapping portfolios, institutions' interaction strengths can be measured from their portfolio holdings, which could aid investment decision-making.
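
Misclassification rate is only meaningful up to a relabeling of the clusters; a small helper that matches predicted labels to true ones with the Hungarian algorithm illustrates the criterion.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def misclassification_rate(true_labels, pred_labels) -> float:
    """Fraction of nodes misclassified under the best matching of
    predicted cluster labels to true ones."""
    true_labels = np.asarray(true_labels)
    pred_labels = np.asarray(pred_labels)
    k = max(true_labels.max(), pred_labels.max()) + 1
    confusion = np.zeros((k, k), dtype=int)
    for t, p in zip(true_labels, pred_labels):
        confusion[t, p] += 1
    rows, cols = linear_sum_assignment(-confusion)   # maximize correctly matched counts
    return 1.0 - confusion[rows, cols].sum() / true_labels.size

print(misclassification_rate([0, 0, 1, 1, 2, 2], [1, 1, 0, 0, 2, 0]))  # ~0.167
```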

  • 00:55:00 Andreea Minca discusses the clustering algorithm and how it can be applied to overlapping financial portfolios to allocate risk efficiently and diversify investments. By clustering portfolios based on the strength of their overlap, the algorithm recovers five clusters that have grown larger over the decade, indicating increased overlap. This provides a clustering tool that is more effective here than other existing methods. Additionally, Minca discusses how a separate algorithm fills in gaps for the Korean example and creates individual networks consistent with the aggregate data published by the government.
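
A sketch of how a network of overlapping portfolios might be built from a holdings matrix, with edge weights given by pairwise overlap; the cosine normalization is one common choice and not necessarily the one used in Minca's work.

```python
import numpy as np

rng = np.random.default_rng(5)
holdings = rng.random((8, 40))                     # 8 institutions x 40 assets (raw weights)
holdings /= holdings.sum(axis=1, keepdims=True)    # normalize rows to portfolio weights

norms = np.linalg.norm(holdings, axis=1, keepdims=True)
overlap = (holdings @ holdings.T) / (norms @ norms.T)   # cosine overlap in [0, 1]
np.fill_diagonal(overlap, 0.0)                          # drop self-overlap

print(overlap.round(2))   # weighted adjacency matrix of the overlap network, ready for clustering
```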

  • 01:00:00 Andreea Minca discusses the issue of achieving diversification through clustering algorithms for financial networks. She shows that having one cluster with extremely high inter-cluster connectivity and another with no connectivity would not achieve diversification. Instead, she presents an approach that identifies five alternatives for diversification under a cluster risk parity approach. She also answers questions about the preprint of her work, the availability of a tool, and the sensitivity of the algorithm to the number of clusters, while also suggesting the use of standard investment categories to quantify the performance of clustering algorithms.
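
As an illustration of cluster-level diversification, here is a naive cluster risk parity scheme: an equal risk budget per cluster, then inverse-volatility weights within each cluster. This is a generic construction, not the specific approach presented in the talk.

```python
import numpy as np

def cluster_risk_parity(returns: np.ndarray, labels: np.ndarray) -> np.ndarray:
    """Naive cluster risk parity: equal budget per cluster, then
    inverse-volatility weights within each cluster."""
    vols = returns.std(axis=0)
    weights = np.zeros(vols.size)
    clusters = np.unique(labels)
    for c in clusters:
        idx = labels == c
        inner = 1.0 / vols[idx]
        # each cluster receives an equal share of the total budget
        weights[idx] = (inner / inner.sum()) / clusters.size
    return weights

rng = np.random.default_rng(6)
returns = rng.normal(0, [0.01, 0.02, 0.01, 0.03, 0.02], size=(500, 5))
labels = np.array([0, 0, 1, 1, 1])          # e.g. clusters recovered by the algorithm above
print(cluster_risk_parity(returns, labels).round(3))   # weights sum to 1
```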

  • 01:05:00 Andreea Minca discusses the topic of clustering algorithms and the recovery of clusters, using the example of recovering five clusters from five investment strategies. She also notes that it can be difficult to compare the clustering results from different choices without good domain knowledge or assumptions about the number of clusters. However, there are no theoretical results on this matter, which highlights the importance of making well-informed decisions when using clustering algorithms.
Andreea Minca (Cornell ORIE): Clustering Heterogeneous Financial Networks
Andreea Minca (Cornell ORIE): Clustering Heterogeneous Financial Networks
  • 2022.04.27
  • www.youtube.com
Abstract: For the degree corrected stochastic block model in the presence of arbitrary or even adversarial outliers, we develop a convex-optimization-based c...