Quantitative trading - page 12

 

16. Portfolio Management

The "Portfolio Management" video delves into a wide range of topics related to portfolio management, providing a comprehensive understanding of the subject. The instructor adopts a practical approach, connecting theory with real-life applications and personal experiences in the buy-side industry. Let's dive into the different sections covered in the video:

  • Intuitive Construction of Portfolios: The instructor initiates the class by encouraging students to intuitively construct portfolios on a blank page. By breaking down investments into percentages, they demonstrate how asset allocation plays a crucial role in portfolio management. Students are prompted to think about the allocation of their investments and how to utilize their funds from day one. This exercise helps students grasp the fundamentals of portfolio construction and provides insights into decision-making processes.

  • Theory Connecting with Practice: This section highlights the significance of observation as the first step towards learning something useful. The instructor explains that theories and models are built based on data collection and pattern recognition. However, in the field of economics, repeatable patterns are not always evident. To validate theories, observations must be confirmed or tested under various scenarios. Students are encouraged to share their portfolio constructions, fostering active participation and engagement.

  • Understanding Portfolio Management Goals: The instructor emphasizes the importance of understanding the goals of portfolio management before addressing how to group different assets or exposures together. They present a chart illustrating spending as a function of age, emphasizing that everyone's spending patterns are unique. Recognizing one's situation is crucial for establishing portfolio management goals effectively.

  • Balancing Spending and Earnings: The speaker introduces the concept of the spending and earning curve, highlighting the mismatch between the two. To bridge the gap, investments generating cash flows are necessary to balance earning and spending. The section also covers diverse financial planning scenarios, such as retirement planning, student loan repayment, pension fund management, and university endowment management. The challenges of allocating capital to traders with different strategies and parameters are discussed, with risk commonly measured by variance or standard deviation.

  • Return and Standard Deviation: This section delves into the relationship between return and standard deviation. The speaker explores the principles of modern portfolio theory, exemplifying them through special cases. Investments such as cash, lottery, coin flipping, government bonds, venture capitalist funding, and stocks are positioned on a return vs. standard deviation chart, providing a clearer understanding of the concepts.

  • Investment Choices and the Efficient Frontier: The speaker delves into different investment choices and their placement on a map of return versus volatility. They introduce the efficient frontier: the set of portfolios that offer the highest expected return for a given level of standard deviation. The section focuses on the special case of a two-asset portfolio, showing how to calculate its variance and standard deviation from the weights, volatilities, and correlation. This overview enables viewers to grasp how portfolio theory can inform investment decisions.

  • Diversification Benefits and Risk Parity: The speaker investigates scenarios in portfolio management, highlighting the benefits of diversification. They discuss three cases: one asset with zero volatility (a riskless asset), two assets with equal volatilities and zero correlation, and two assets with perfect positive or negative correlation. Diversification is emphasized as an effective way to reduce a portfolio's standard deviation; a small numerical sketch of these cases appears after this list.

  • Leveraging Portfolio Allocation: This section introduces the concept of leverage as a means to increase expected returns beyond equal weight allocation. By leveraging the bond-to-stock allocation, investors can potentially achieve higher expected returns. The speaker emphasizes the significance of balancing leverage to optimize risk and return.

  • Sharpe Ratio and Kelly's Formula: The video delves into the Sharpe ratio, also known as risk-weighted or risk-adjusted return, and Kelly's formula. While asset allocation plays a critical role in portfolio management, the video emphasizes that relying solely on the efficient frontier is insufficient. The section provides an example of a 60-40 portfolio to demonstrate the effectiveness of asset allocation but also its potential volatility.

  • Risk Parity and Portfolio Optimization: The concept of risk parity is introduced as an alternative to the traditional 60-40 asset allocation based on market value. Risk parity aims to achieve equal weighting of risk between two assets rather than market exposure, resulting in a lower standard deviation and reduced risk. The video emphasizes the idea of diversification as a source of a "free lunch," and a simple example is presented to illustrate how equal weighting of two assets can lead to a better outcome. Rebalancing is also discussed as a method to maintain the desired 50-50 asset weighting in a risk parity approach.

  • Diversification Benefits and Asset Combinations: The instructor discusses diversification benefits and how combining assets in a portfolio can reduce volatility. They contrast the traditional 60/40 stock-bond allocation with risk parity, which aims to achieve an equal risk weighting in the portfolio. By diversifying across different asset classes, investors can potentially mitigate risk and enhance portfolio performance.

  • The Role of Leverage and Portfolio Efficiency: The speaker highlights the importance of leverage in portfolio allocation. They explain that adding leverage to a portfolio can enhance the efficient frontier, allowing for higher returns. However, it is crucial to carefully manage leverage to avoid excessive risk and potential losses. The section emphasizes the trade-off between risk and return when employing leverage in portfolio management.

  • Optimizing Risk-Adjusted Returns: The Sharpe ratio, a measure of risk-adjusted return, is discussed in relation to portfolio management. The video explains how maximizing the Sharpe ratio can lead to a risk parity portfolio, and emphasizes that changing leverage moves a portfolio along the capital allocation line without changing its slope, which is the Sharpe ratio itself. The speaker also touches on the relationship between beta and the standard deviation of the portfolio, with beta fluctuating based on market volatility.

  • Human vs. Robotic Portfolio Management: The speaker raises the question of whether a human hedge fund manager is necessary in today's era, considering the advancements in technology and algorithms. They mention the possibility of programming a robot to manage a portfolio effectively. However, the answer to this question is left for further exploration and discussion.

  • Unintended Consequences and Systemic Risks: The video demonstrates how the synchronization of events can lead to unintended consequences. Through examples like soldiers marching over a bridge or metronomes synchronizing without brains, the speaker highlights the risks of everyone implementing the same optimal strategy, potentially leading to system-wide collapse. The section emphasizes the need for continuous observation, data collection, model-building, and verification to address complex problems in portfolio management.

  • Limitations and Uncertainty in Portfolio Management: The video acknowledges the challenges of forecasting returns, volatility, and correlation in portfolio management. Historical data is often used to make predictions, but the future remains uncertain. The speaker discusses the limitations of estimating returns and volatilities, pointing out the ongoing debate within the field. They suggest exploring the book "Fortune's Formula" to gain insights into the history and ongoing discussions surrounding portfolio optimization.
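
To make the two-asset calculation referenced above concrete, here is a small numerical sketch (not from the lecture; the return and volatility figures are made-up inputs) showing how the portfolio standard deviation depends on the weights and the correlation, including the three special cases discussed:

```python
import numpy as np

# Hypothetical annualized inputs for a "stock" and a "bond" (illustrative only).
mu    = np.array([0.08, 0.03])   # expected returns
sigma = np.array([0.15, 0.05])   # standard deviations

def portfolio_stats(w1, rho):
    """Expected return and standard deviation of a two-asset portfolio
    with weight w1 in asset 1, 1 - w1 in asset 2, and correlation rho."""
    w = np.array([w1, 1.0 - w1])
    ret = w @ mu
    var = (w[0] * sigma[0])**2 + (w[1] * sigma[1])**2 \
          + 2.0 * rho * w[0] * w[1] * sigma[0] * sigma[1]
    return ret, np.sqrt(var)

for rho in (1.0, 0.0, -1.0):                      # the correlation cases discussed above
    for w1 in (0.0, 0.25, 0.5, 0.75, 1.0):
        ret, sd = portfolio_stats(w1, rho)
        print(f"rho={rho:+.0f}  w_stock={w1:.2f}  E[r]={ret:.3f}  sd={sd:.3f}")
```

With correlation -1, the standard deviation reaches zero at w_stock = sigma_2 / (sigma_1 + sigma_2) = 0.25, the zero-variance point mentioned above; with correlation +1 there is no diversification benefit, and with zero correlation every mixed portfolio has a lower standard deviation than the weighted average of the two volatilities.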

Throughout the video, the instructor emphasizes the interconnectedness of individuals in the market and the importance of considering this aspect when optimizing portfolios. The speaker also underscores the role of game theory and the complexity of finance as compared to well-defined problems in physics. They highlight the significance of active observation, data-driven models, and adaptation to address challenges in portfolio management effectively. Lastly, the speaker acknowledges the critical role of management beyond investment decisions, particularly in areas such as HR and talent management.

  • The Importance of Risk Management: Risk management is a crucial aspect of portfolio management that cannot be overlooked. The video emphasizes the need for a comprehensive risk management strategy to protect investments and mitigate potential losses. The speaker discusses the various approaches to risk management, including diversification, hedging, and incorporating risk management tools such as stop-loss orders and trailing stops. They stress the importance of continuously monitoring and reassessing risk exposure to ensure the portfolio remains aligned with the investor's goals and risk tolerance.

  • Behavioral Factors in Portfolio Management: The video delves into the role of behavioral factors in portfolio management. The speaker highlights the impact of investor emotions, biases, and herd mentality on investment decisions. They discuss how these factors can lead to irrational behavior, market inefficiencies, and the formation of bubbles. Understanding and managing these behavioral biases is essential for successful portfolio management. The speaker suggests employing strategies such as disciplined investment processes, long-term thinking, and maintaining a diversified portfolio to counteract behavioral biases.

  • Dynamic Asset Allocation: The concept of dynamic asset allocation is introduced as a strategy that adjusts portfolio allocations based on changing market conditions and economic outlook. The speaker explains that dynamic asset allocation aims to take advantage of market opportunities while mitigating risks. They discuss the importance of monitoring market indicators, economic data, and geopolitical factors to make informed decisions regarding asset allocation. The video emphasizes the need for a flexible approach to portfolio management that adapts to evolving market dynamics.

  • Long-Term Investing and Patience: The video emphasizes the benefits of long-term investing and the importance of patience in achieving investment objectives. The speaker discusses the power of compounding returns over time and the advantages of staying invested through market fluctuations. They emphasize the potential pitfalls of short-term thinking and reactive decision-making. The video encourages investors to adopt a long-term perspective, maintain a well-diversified portfolio, and resist the urge to make impulsive investment decisions based on short-term market volatility.

  • Continuous Learning and Adaptation: The field of portfolio management is constantly evolving, and the video underscores the importance of continuous learning and adaptation. The speaker encourages viewers to stay updated with the latest research, market trends, and technological advancements in the investment industry. They highlight the value of professional development, attending seminars, and networking with peers to enhance knowledge and skills in portfolio management. The video concludes by emphasizing that successful portfolio management requires a commitment to ongoing education and adaptation to changing market dynamics.

In summary, the video provides a comprehensive exploration of various aspects of portfolio management. It covers intuitive portfolio construction, the relationship between risk and return, the concept of risk parity, the efficient frontier, the role of leverage, and the importance of risk management. It also delves into behavioral factors, dynamic asset allocation, long-term investing, and the need for continuous learning and adaptation. By understanding these principles and implementing sound portfolio management strategies, investors can strive to achieve their financial goals while effectively managing risk.
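
As a rough illustration of the risk parity, leverage, and Sharpe ratio ideas summarized above (a sketch with made-up return and volatility figures, not the lecture's numbers): weight the two assets inversely to their volatilities so that each contributes equal risk, then lever the result up to the volatility of a 60/40 mix and compare.

```python
import numpy as np

# Hypothetical annualized figures for "stocks" and "bonds" (illustrative only).
mu    = np.array([0.07, 0.03])   # expected returns
sigma = np.array([0.16, 0.05])   # volatilities
rho   = 0.0                      # assume uncorrelated for simplicity
rf    = 0.01                     # risk-free / borrowing rate

cov = np.array([[sigma[0]**2,               rho * sigma[0] * sigma[1]],
                [rho * sigma[0] * sigma[1], sigma[1]**2]])

def stats(w):
    ret = w @ mu
    vol = np.sqrt(w @ cov @ w)
    return ret, vol, (ret - rf) / vol                  # return, volatility, Sharpe ratio

w_6040 = np.array([0.60, 0.40])                        # classic 60/40 by market value
w_rp   = (1 / sigma) / (1 / sigma).sum()               # naive risk parity: weights ~ 1 / volatility

ret_rp, vol_rp, _ = stats(w_rp)
lev     = stats(w_6040)[1] / vol_rp                    # lever risk parity up to 60/40's volatility
ret_lev = rf + lev * (ret_rp - rf)                     # borrow at rf to finance the extra exposure
vol_lev = lev * vol_rp

rows = [("60/40",                  stats(w_6040)),
        ("risk parity, unlevered", stats(w_rp)),
        ("risk parity, levered",   (ret_lev, vol_lev, (ret_lev - rf) / vol_lev))]
for name, (r, v, s) in rows:
    print(f"{name:24s} return={r:.3f}  vol={v:.3f}  Sharpe={s:.2f}")
```

Leverage scales excess return and volatility by the same factor, so the Sharpe ratio of the levered portfolio is unchanged, which is the "slope of the line" point made above.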

  • 00:00:00 In this section, the instructor discusses the application of modern portfolio theory and shares personal experiences of using it in different areas, focusing on the buy-side perspective. The instructor starts the class by having students intuitively construct a portfolio using a blank page, explaining the meaning of a portfolio and giving examples of how to approach it. The goal of the exercise is to show students how they can break down the percentage of their investments, whether it's a small amount or a large portfolio, and to think about how to use the money on day one. The instructor will then gather the ideas and put them on the blackboard, possibly asking questions to the students about their choices.

  • 00:05:00 In this section, the instructor talks about how theory connects with practice, explaining that observation is the first step towards learning something useful. Once data collection and pattern recognition are completed, theories and models can be built to explain the phenomenon. Unlike in physics, repeatable patterns are not always obvious in economics. After developing a theory, observations must be confirmed or checked for special cases to understand if the model works or not. The instructor then asks the class to return their portfolio constructions and says that there will be no more slides to ensure that the class keeps up with him.

  • 00:10:00 In this section of the video, the speaker presents a list of various assets that people have high conviction in, including small cap equities, bonds, real estate, commodities, quantitative strategies, selection strategies, deep value models, and more. They then ask the question of how to group these assets or exposures together and explain that before answering that question, it is essential to understand the goals of portfolio management. They present a chart that plots spending as a function of age, highlighting the fact that everyone's spending pattern is different and that knowing your situation is critical for understanding portfolio management goals.

  • 00:15:00 In this section, the speaker explains the spending and earnings curve, and how they don't always match up. In order to make up the difference, one must have an investment that generates cash flows to balance earning and spending. Different situations require different financial planning, such as retiring at a certain age, paying off student loans in one year, or managing a pension fund or university endowment. The speaker also discusses the challenges of allocating capital to traders with different strategies and parameters, and how risk is not well-defined but is typically measured by variance or standard deviation.

  • 00:20:00 In this section, the speaker discusses the relationship between return and standard deviation, with the understanding that standard deviation cannot go negative while return can go below zero. They review the Harry Markowitz modern portfolio theory and provide special cases as examples to help understand the concepts better. The speaker also provides examples of where certain investments, such as cash, lottery, coin flipping, government bonds, venture capitalist funding, and buying stocks, would fall in the return vs. standard deviation chart.

  • 00:25:00 In this section, the speaker discusses different investment choices and where they sit on a map of volatility versus return. The speaker explains how to pick investments using the efficient frontier, the set of combinations of investments that offer the maximum return for a given standard deviation. The speaker reduces this to the special case of two assets and explains how to calculate the standard deviation and variance of that portfolio. Overall, this section provides an overview of how to use portfolio theory to pick investments.

  • 00:30:00 In this section, the speaker goes over special cases of the two-asset portfolio. Firstly, when sigma_1 = 0 and sigma_2 ≠ 0, asset 1 is riskless, so the correlation term plays no role in the portfolio variance. Secondly, when sigma_1 = sigma_2 ≠ 0 and the two assets are uncorrelated, diversification lowers the standard deviation of the portfolio. Finally, when the assets are perfectly positively correlated, combining them offers no diversification benefit, and when they are perfectly negatively correlated, the portfolio standard deviation can be brought to its lowest point, zero. The speaker emphasizes the importance of diversification in reducing the standard deviation of a portfolio.

  • 00:35:00 In this section of the video, the speaker talks about different cases in portfolio management. He explains that when cash is added to the portfolio, it becomes a riskless asset and can be combined with non-cash assets to create a higher efficient frontier and higher returns. He also notes that when the weights of assets are at both extremes, the returns are the same, but when weights are balanced, the variance can be reduced to zero. Finally, the speaker discusses the slope of the line and its relation to the capital market line and efficient frontier.

  • 00:40:00 In this section, the speaker discusses the efficient frontier for portfolio management, focusing on examples with two and three assets. He explains that for two assets with a correlation of minus one, the variance, a quadratic function of the weight, can be brought down to zero. For three assets with equal volatilities and zero correlation, equal weighting reduces the portfolio standard deviation to sigma_1 divided by the square root of three. The speaker emphasizes that the two-asset example is significant in practice for comparing combinations, such as the popular 60-40 benchmark of equity and bonds, and leads to the discussion of beta and the Sharpe ratio.

  • 00:45:00 In this section, the concept of the Sharpe ratio, also known as risk-weighted or risk-adjusted return, and Kelly's formula are discussed. It is explained that while asset allocation is critical in portfolio management, simply using the efficient frontier to determine asset weights and strategies to choose is not sufficient. The 60-40 portfolio example is given to show how asset allocation can be effective but also volatile, as demonstrated by the 2000 tech bubble and the 2008 financial crisis.

  • 00:50:00 In this section, the concept of risk parity is introduced as an alternative to the traditional 60-40 allocation of assets based on market value. Risk parity involves equal weighting of risk between two assets, as opposed to market exposure, in order to achieve a lower standard deviation and risk. The idea of diversification as a source of a "free lunch" is also discussed, with a simple example given to demonstrate how an equal weighting of two assets may lead to a better outcome. The concept of rebalancing is introduced as a way to maintain the 50-50 weighting of assets in the risk parity approach.

  • 00:55:00 In this section, the instructor discusses the concept of diversification benefits and how it can be achieved through combining assets in a portfolio to reduce volatility. He talks about the 60/40 bond market and risk parity, which aims to achieve an equal risk weighting in a portfolio. The concept of leverage is introduced when discussing how to go beyond the equal weight allocation and create more risk. The instructor proposes leveraging the 25/75 bond-to-stock allocation to achieve higher expected returns.

  • 01:00:00 In this section, the speaker discusses the relationship between leverage, standard deviation, and the Sharpe ratio in a risk parity portfolio. They explain that by maximizing the Sharpe ratio, one can achieve a risk parity portfolio and that changing leverage does not affect the slope of the line on the curve. They also touch on the relationship between beta and the standard deviation of the portfolio, with beta increasing or decreasing depending on the volatility of the market. Finally, the speaker poses the question of why anyone needs a hedge fund manager when one can program a robot to manage a portfolio, but leaves the answer to this question for later.

  • 01:05:00 In this section, the video demonstrates how the synchronization of events can create unintended consequences. The example of soldiers marching over a bridge illustrates how the force of people moving in sync can create an imbalance that causes things to collapse. The same phenomenon applies to portfolios when everyone implements the same optimal strategy, creating a system that is in danger of collapse. The video shows another example using metronomes that synchronize without having brains. This phenomenon is explained in a book, and the demonstration creates a significant impact.

  • 01:10:00 In this section, the speaker discusses the concept of maximizing results by taking into consideration that all the individuals in the market are interconnected. They emphasize that finding a stationary, best way of optimizing your portfolio may lead to everyone figuring out the same thing and ultimately leading to losses. The speaker also mentions that the field of finance, particularly quantitative finance, is not predictable and is not a mechanical process like solving physics problems. The idea of observing, collecting data, building models, verifying, and observing again is crucial to addressing problems. The speaker explains that game theory plays a significant role in the market situation, but it is more complex than a well-defined set of rules. Finally, the concept of risk parity portfolios is discussed, pointing out that the portfolio's success may hinge on how well you can accurately determine which asset has low volatility.

  • 01:15:00 In this section, the speaker discusses a risk parity approach to portfolio management, where bonds are given an overweight due to their lower volatility. However, the portfolio can still perform poorly if bonds experience a sell-off, as seen after Bernanke announced tapering of quantitative easing. This raises the question of whether the risk parity approach is effective or not. The speaker notes that historical data is used to forecast volatility, return, and correlation, but the future is always uncertain. Additionally, career investors tend to benchmark and follow the herd, which hinders discovering new asset classes or inventing new strategies. Finally, while computers are beating humans in many ways, it's unclear if they can ever completely replace human investment managers. The speaker also notes that management has a key role in HR and talent management, not just focusing on investments.

  • 01:20:00 In this section, the speaker talks about risk and how it is not best measured just by volatility or standard deviation. Risk can be looked at through many lenses, and one view holds that expected return is ultimately all that matters in portfolio management. The speaker disagrees, arguing that it is important to differentiate between two managers with the same expected return, and that this is where the debate lies. The section ends with a discussion of the limitations of estimating returns and volatilities.

  • 01:25:00 In this section, the speakers discuss the difficulty of forecasting returns, volatility, and correlation in portfolio management. They suggest that the risk parity portfolio focuses on equalizing risk rather than returns and may be a better strategy. Additionally, they mention the Kelly criterion, which deals with the issues of multi-period investments and optimal betting with one's bankroll. They recommend looking into the book "Fortune's Formula" to learn more about the history and debate around portfolio optimization.
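
To illustrate the Kelly criterion mentioned here (a toy sketch, not from the lecture): for a repeated even-money bet that wins with probability p, the Kelly fraction f* = 2p - 1 maximizes long-run compounded growth, and betting much more than that destroys the bankroll even though each individual bet still has positive expected value.

```python
import numpy as np

rng = np.random.default_rng(0)
p, n_bets, n_paths = 0.55, 1000, 2000     # hypothetical edge, horizon, and number of simulated paths

def median_terminal_wealth(f):
    """Median terminal wealth (starting from 1) when betting a fixed fraction f each round."""
    wins = rng.random((n_paths, n_bets)) < p
    growth = np.where(wins, 1.0 + f, 1.0 - f)   # per-bet multiplicative growth factor
    return np.median(np.prod(growth, axis=1))

f_kelly = 2 * p - 1                              # = 0.10 for p = 0.55
for f in (0.02, f_kelly, 0.25, 0.50):
    print(f"betting fraction {f:.2f}: median terminal wealth {median_terminal_wealth(f):.3g}")
```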

Source: "16. Portfolio Management", MIT 18.S096 Topics in Mathematics with Applications in Finance, Fall 2013 (Instructor: Jake Xia), published 2015-01-06 on www.youtube.com. View the complete course: http://ocw.mit.edu/18-S096F13

17. Stochastic Processes II

In this section of the video series, the concept of Brownian motion is introduced as a solution to the difficulty of handling the probability density of a path in a stochastic process, particularly in the case of a continuous variable. Brownian motion is a probability distribution over the set of continuous functions from positive reals to the reals. It has properties that make it a reasonable model for various phenomena, such as observing the movement of pollen in water or predicting the behavior of stock prices.
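
As a quick illustration of the random-walk limit described here (a sketch, not code from the lecture): scale a simple plus/minus-one random walk by sqrt(t/n) and the result behaves like a Brownian motion; in particular, its value at time t is approximately Normal(0, t).

```python
import numpy as np

rng = np.random.default_rng(1)
n, t = 2_000, 1.0                                 # steps per path, time horizon

# One path: a +/-1 random walk scaled by sqrt(t/n) approximates Brownian motion on [0, t].
steps = rng.choice([-1.0, 1.0], size=n) * np.sqrt(t / n)
B = np.concatenate([[0.0], np.cumsum(steps)])
print("one sample of B_1:", B[-1])

# Across many paths the endpoint should be roughly Normal(0, t): check mean and variance.
endpoints = (rng.choice([-1.0, 1.0], size=(5_000, n)) * np.sqrt(t / n)).sum(axis=1)
print("empirical mean of B_1    :", endpoints.mean())
print("empirical variance of B_1:", endpoints.var(), "(should be close to", t, ")")
```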

Additionally, the video introduces Ito's calculus, an extension of classical calculus to the stochastic-process setting. Traditional calculus does not work with Brownian motion, and Ito's calculus provides the tools needed to model the percentage change (the return) of stock prices. Ito's lemma, derived from the Taylor expansion, is the fundamental tool of stochastic calculus: it expresses the change of a function over a small time increment in terms of the Brownian motion. It enriches the theory of calculus and enables the analysis of processes involving Brownian motion.
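
For reference, the statement summarized here can be written compactly (an informal sketch; it comes from keeping the second-order term of the Taylor expansion that classical calculus would drop):

```latex
% Ito's lemma for a smooth function f applied to Brownian motion (informal statement)
df(t, B_t) = \frac{\partial f}{\partial t}\, dt
           + \frac{\partial f}{\partial x}\, dB_t
           + \tfrac{1}{2} \frac{\partial^{2} f}{\partial x^{2}}\, (dB_t)^{2},
\qquad (dB_t)^{2} = dt
```

The extra term relative to classical calculus is therefore (1/2) f_xx(t, B_t) dt.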

The video also discusses the properties of Brownian motion, such as the fact that it is nowhere differentiable and crosses the t-axis infinitely often. Despite these characteristics, Brownian motion has real-life implications and can be used as a physical model for quantities like stock prices. The limit of a simple random walk is a Brownian motion, and this observation helps in understanding its behavior.

Furthermore, the video explores the distribution and expectation of a sum of random variables in the context of Brownian motion. It discusses how, by the strong law of large numbers, the sum of the squared increments of a Brownian path over [0, t] converges to t, the path's quadratic variation.
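
A quick numerical check of this convergence (an illustrative sketch, not lecture code): the sum of squared Brownian increments over [0, T] approaches T as the partition gets finer, whereas the same sum for a smooth function goes to zero.

```python
import numpy as np

rng = np.random.default_rng(2)
T = 2.0

for n in (100, 10_000, 1_000_000):
    dB = rng.normal(0.0, np.sqrt(T / n), size=n)   # Brownian increments on a grid of n steps
    qv_brownian = np.sum(dB**2)                    # quadratic variation of the Brownian path

    t = np.linspace(0.0, T, n + 1)
    df = np.diff(np.sin(t))                        # increments of a smooth (differentiable) function
    qv_smooth = np.sum(df**2)

    print(f"n={n:>9}:  sum(dB^2)={qv_brownian:.4f} (-> {T}),  sum(df^2)={qv_smooth:.6f} (-> 0)")
```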

In summary, this section of the video series introduces Brownian motion as a solution for handling the probability density of a path in a stochastic process. It explains the properties of Brownian motion, its application in modeling stock prices and financial derivatives, and the need for Ito's calculus to work with it. Understanding these concepts is essential for analyzing continuous time stochastic processes and their applications in various fields.

  • 00:00:00 In this section, the professor introduces the topic of continuous stochastic processes and reminds students to review concepts such as martingales and Markov chains, which will be used in upcoming lectures. He also explains that unlike in discrete time processes, the underlying time variable is continuous in continuous time processes. This leads to the difficulty of describing the probability distribution without using indirect methods, as it would require an infinite number of intervals to describe the continuous time process.

  • 00:05:00 In this section of the video, the speaker discusses the difficulty of handling the probability density of a path in a stochastic process, particularly in the case of a continuous variable. They introduce the concept of Brownian motion as a solution to this problem, which is a probability distribution over the set of continuous functions from positive reals to the reals. This distribution ensures that the process always starts at 0, has stationary increments with a normal distribution, and independent increments between non-overlapping intervals. Although this distribution is very complicated, it is necessary to describe the probability of the path happening when dealing with a continuous time variable.

  • 00:10:00 In this section, the professor discusses the probability distribution of a Brownian motion and how it satisfies certain conditions which make it very difficult to prove. The space of all possible paths makes it a complicated probability space. The professor then explains how the Brownian motion is the limit of simple random walks and discusses its other names such as Wiener process. He concludes by stating that the next few lectures will reveal the importance of studying continuous time stochastic processes.

  • 00:15:00 In this section, the concept of taking the limit is discussed in relation to the Brownian motion and how it can be used to model stock prices. By taking a simple random walk, scaling it from time 0 to time 1, and linearly extending the intermediate values, the resulting distribution is a Brownian motion. This process is not new; it is the limit of these objects that we already know of. This observation has implications when using the Brownian motion as a physical model for some quantity, such as stock prices. Brownian motion was discovered by botanist Brown in the 1800s when observing a pollen particle in water, leading to the realization that there is a continuous jittery motion, known today as Brownian motion.

  • 00:20:00 In this section, the speaker discusses why Brownian motion is a reasonable model for certain phenomena, such as the movement of pollen in water or the behavior of stock prices. Brown observed the jittery motion of pollen in water, but Einstein was the first to explain it rigorously and provide insights. Tiny water molecules move rapidly and randomly; each collision with the pollen grain nudges its direction slightly. Similarly, at very small time scales the price of a stock keeps fluctuating as trades push it up or down. In both cases the motion looks like the limit of a simple random walk, which makes Brownian motion a reasonable model to use.

  • 00:25:00 In this section, the speaker explains some properties of the Brownian motion path, including the facts that it crosses the t-axis infinitely often, does not deviate too much from the curve y = sqrt(t), and is nowhere differentiable. Although this may seem surprising and even problematic, it has real-life implications, and a modified version of calculus, called Ito's calculus, can be used to analyze it.

  • 00:30:00 In this section, the concept of Ito's calculus is introduced as an extension of classical calculus to the stochastic processes setting. However, only basic properties and computations of it will be covered due to time constraints. Before delving into Ito's calculus, the properties of Brownian motion are discussed, in particular, as a model for stock prices. The distribution of the min value and max value for stock prices using Brownian motion as a model is computed and it is shown that for all t, the probability of having M(t) greater than a and positive a is equal to 2 times the probability of having the Brownian motion greater than a. The proof involves the use of stopping time to record the first time the Brownian motion hits the line a.

  • 00:35:00 In this section, the speaker discusses the probability of a Brownian motion hitting a level a before time t and what happens afterwards. If the motion hits the level before time t, the probability of it ending up above or below a is the same, because the path can be reflected. The speaker then explains how this relates to the running maximum at time t exceeding a and shows, by rearranging the probabilities, that the probability of the maximum at time t being greater than a equals twice the probability that the Brownian motion itself is greater than a (a Monte Carlo check of this identity appears after this list).

  • 00:40:00 In this section, the speaker discusses the computation of the probability that the maximum of a stochastic process exceeds a given value by a particular time. There are only two possibilities after the stopping time tau_a: the path either ends up above or below the level, and both events have the same probability. The speaker also proves that Brownian motion is not differentiable at any given time with probability 1, using the mean value theorem: if the path were differentiable at t with derivative bounded by some constant A, its gain over the interval from t to t plus epsilon could be at most about A times epsilon, which the Brownian path violates.

  • 00:45:00 In this section, the speaker discusses properties of Brownian motion and quadratic variation, which will be important in Ito's calculus. The speaker explains that if Brownian motion were differentiable at a point, the path would have to stay inside a cone near that point; this cannot happen, because the running maximum over any small interval is too large. The speaker then introduces the concept of quadratic variation and explains its importance in calculus, where the time interval is chopped into n pieces and the squared increments are summed.

  • 00:50:00 In this section, the speaker discusses quadratic variation and its implications for Brownian motion. Quadratic variation involves taking the difference between consecutive points in a function and squaring it, then summing it as n goes to infinity. For Brownian motion, the limit of this sum goes to T, but for continuously differentiable functions, the quadratic variation is 0. The non-differentiability of Brownian motion has important implications, such as being able to model stock prices and diffusion processes.

  • 00:55:00 In this section, the professor discusses the distribution of a sum of random variables and its expectation while exploring Brownian motion. He explains that the sum of the n squared increments, each with mean T over n, converges to T by the strong law of large numbers. He then mentions that this holds for Brownian motion paths with probability one.

  • 01:00:00 In this section, the speaker talks about Ito's calculus and its motivation. Brownian motion is not a bad model for stock prices, but it is not ideal: it is the percentage change (the return), rather than the absolute difference, that should be normally distributed. The differential equation for the stock therefore models the percentage change dS/S as driven by Brownian motion. However, classical calculus does not work in this case because Brownian motion is not differentiable. Something else is required, and that is where Ito's calculus comes in. The speaker also explains how Ito's calculus can be used to work with infinitesimal differences, which is helpful for pricing options.

  • 01:05:00 In this section, the speaker discusses financial derivatives, which are functions applied to an underlying financial asset. Understanding how the derivative's value changes with respect to changes in the underlying asset is crucial. Since Brownian motion cannot be differentiated, the speaker instead works with the infinitesimal increment dB_t and uses it to describe the change of the function in terms of the derivatives of f. He then explains that naive first-order differentiation is not valid because dB_t squared is of order dt and cannot be neglected, which he goes on to explain.

  • 01:10:00 In this section, the concept of Ito's lemma is introduced as a fundamental tool in stochastic calculus. Ito's lemma is derived from Taylor's expansion and allows for the calculation of the difference of a function over a small time increase using Brownian motion. The lemma is considered nontrivial and highly cited in research papers, as it enables calculus with Brownian motion and greatly enriches the theory of calculus. This section stresses the importance of Ito's lemma in stochastic calculus.

  • 01:15:00 In this section, the speaker explains that dB_t squared is equal to dt, which is due to B_t being like a normal random variable with a mean of 0 and a variance of t. Calculus using Brownian motion becomes more complex because of this computation. The speaker encourages the viewers to think about the concept and mentions that he will review it again.
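
Relating to the reflection-principle result discussed around the 00:30 and 00:35 marks above, here is a small Monte Carlo check (a sketch with arbitrary parameters, not lecture code) that the probability of the running maximum exceeding a by time t is roughly twice the probability that B_t itself exceeds a:

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(3)
t, a, n_steps, n_paths = 1.0, 1.0, 1_000, 10_000

dB = rng.normal(0.0, np.sqrt(t / n_steps), size=(n_paths, n_steps))
paths = np.cumsum(dB, axis=1)                       # simulated Brownian paths on a discrete grid

p_max = np.mean(paths.max(axis=1) > a)              # P(running maximum exceeds a by time t)
p_end = np.mean(paths[:, -1] > a)                   # P(B_t > a)

print("P(M_t > a) estimate     :", p_max)
print("2 * P(B_t > a) estimate :", 2 * p_end)
print("2 * P(B_t > a) exact    :", 2 * norm.sf(a / np.sqrt(t)))
```

The discrete grid slightly underestimates the true running maximum, so the first number comes out a little below the other two.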

Source: "17. Stochastic Processes II", MIT 18.S096 Topics in Mathematics with Applications in Finance, Fall 2013 (Instructor: Choongbum Lee), published 2015-01-06 on www.youtube.com. View the complete course: http://ocw.mit.edu/18-S096F13

18. Itō Calculus

In this comprehensive video on Ito calculus, a wide range of topics related to stochastic processes and calculus is covered. The professor delves into the intricacies of Ito's lemma, a more sophisticated version of the original, and provides a detailed explanation of the quadratic variation of Brownian motion. The concept of drift in a stochastic process is explored, along with practical demonstrations of how Ito's lemma can be applied to evaluate such processes. The video also touches upon integration and the Riemannian sum type description of integration, adapted processes, and martingales. The importance of practicing basic computation exercises to gain familiarity with the subject is emphasized. Furthermore, the video concludes by giving a preview of the upcoming topic, the Girsanov theorem.

In the subsequent section of the video, the professor continues the discussion on Ito calculus by reviewing and presenting Ito's lemma in a slightly more general form. Through the use of Taylor expansion, the professor analyzes the changes in a function, f, when its first and second variables vary, and uses Brownian motion to evaluate f(t, B_t). By incorporating the quadratic variation of Brownian motion and the two variables, t and x, the video explains why Ito calculus differs from classical calculus by an additional term. Moving on, the video focuses on the second-order term in the Taylor expansion, expressed in terms of partial derivatives. The crucial terms, namely ∂f/∂t dt, ∂f/∂x dx, and the second-order terms, are examined. By rearranging these terms, a more sophisticated form of Ito's lemma is derived, incorporating an additional term. The video shows that the terms involving dt squared and dt times dB_t are negligible, while the term involving the second derivative of f with respect to x survives because dB_t squared equals dt. This leads to a refined understanding of Ito calculus.

The video proceeds by introducing the concept of a stochastic process with a drift term resulting from the addition of a term to a Brownian motion. This type of process becomes the primary object of study, where the difference can be expressed in terms of a drift term and a Brownian motion term. The general form of Ito's lemma is explained, which deviates from the original form due to the presence of quadratic variation. Furthermore, the video employs Ito's lemma to evaluate stochastic processes. The quadratic variation allows for the separation of the second derivative term, enabling the derivation of complex terms. An example involving the function f(x) = x^2 is presented, demonstrating how to compute d of f at B_t. The first partial derivative of f with respect to t is determined to be 0, while the partial derivative with respect to x is 2x, with the second derivative being 2 at t, x.
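
Stated compactly, as a reference sketch of the formulas summarized above (informal notation, for a process dX_t = mu dt + sigma dB_t):

```latex
% General Ito's lemma for dX_t = \mu\, dt + \sigma\, dB_t (informal statement)
df(t, X_t) = \Big( \frac{\partial f}{\partial t}
                 + \mu \frac{\partial f}{\partial x}
                 + \tfrac{1}{2} \sigma^{2} \frac{\partial^{2} f}{\partial x^{2}} \Big) dt
           + \sigma \frac{\partial f}{\partial x}\, dB_t

% Worked example from the lecture: f(x) = x^2 applied to Brownian motion (\mu = 0, \sigma = 1)
d\big(B_t^{2}\big) = 2 B_t\, dB_t + dt
```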

The video proceeds to explain the calculation of df(t, B_t). The formula consists of the terms (∂f/∂t) dt, (∂f/∂x) dB_t, and ½ (∂²f/∂x²) dB_t², with dB_t² equal to dt. Examples are provided to aid in understanding how to utilize these formulas and how to substitute the variables. The distinction between sigma and a variable sigma prime in the formula, and when to apply each, is also explained. Brownian motion is used as the basis for this formula, as it represents the simplest form.

In the subsequent section, the professor addresses the proposed model for the stock price using Brownian motion, explaining why S_t should not simply be e^(sigma B_t). Although B_t itself has expected value 0, the exponential e^(sigma B_t) drifts upward: its expectation grows like e^(sigma^2 t / 2). To remove this drift, the term ½ sigma² t is subtracted in the exponent, giving the new model S_t = exp(-½ sigma² t + sigma B_t), a geometric Brownian motion without drift. The professor further explains that given a sample path of B_t, we obtain a corresponding sample path of S_t by applying this exponential at each time.
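
A small simulation check of this drift correction (a sketch with an arbitrary sigma, not lecture code): the naive exponential e^(sigma B_t) has an expectation that grows like e^(sigma^2 t / 2), while the corrected S_t keeps expectation 1.

```python
import numpy as np

rng = np.random.default_rng(4)
sigma, t, n_paths = 0.4, 2.0, 200_000               # illustrative parameters

B_t = rng.normal(0.0, np.sqrt(t), size=n_paths)     # samples of B_t ~ Normal(0, t)

naive     = np.exp(sigma * B_t)                             # e^{sigma B_t}: drifts upward
corrected = np.exp(-0.5 * sigma**2 * t + sigma * B_t)       # geometric Brownian motion, no drift

print("E[exp(sigma * B_t)] ~", naive.mean(),     "(theory:", np.exp(0.5 * sigma**2 * t), ")")
print("E[S_t]              ~", corrected.mean(), "(theory: 1.0)")
```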

Next, the video shifts its focus to the definition of integration. Integration is first described as the inverse of differentiation, a definition the professor calls "stupid", and the question arises whether such an integral always exists for given f and g. The video then explores the Riemannian sum type description of integration, which divides the interval into very fine pieces and sums the areas of the corresponding boxes; the integral is then defined as the limit of these sums as n goes to infinity.

An intriguing question regarding the relationship between the Ito integral and the Riemannian sum type description is addressed. The video explains that the Ito integral lacks the property of the Riemannian sum, where the choice of the point within the interval does not matter. Additionally, the video mentions an alternative version of Ito calculus that considers the rightmost point of each interval instead of the leftmost point. This alternative version, while equivalent to Ito calculus, incorporates minus signs instead of plus signs in the second-order term. Ultimately, the video emphasizes that in the real world, decisions regarding time intervals must be made based on the leftmost point, as the future cannot be predicted.

The speaker provides an intuitive explanation and definition of adapted processes in Ito calculus. Adapted processes are characterized by making decisions solely based on past information up until the current time, a fact embedded within the theory itself. The video illustrates this concept using examples such as a stock strategy that solely relies on past stock prices. The relevance of adapted processes in the framework of Ito calculus is highlighted, particularly in situations where decisions can only be made at the leftmost time point and future events remain unknown. The speaker emphasizes the importance of understanding adapted processes and provides several illustrative examples, including the minimum delta t strategy.
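
The left-endpoint convention can also be seen numerically (an illustrative sketch, not lecture code): approximating the integral of B dB with left endpoints converges to (B_T^2 - T)/2, the Ito integral, while using right endpoints converges to (B_T^2 + T)/2; the difference is exactly the quadratic variation, which is the sign flip in the second-order term mentioned above.

```python
import numpy as np

rng = np.random.default_rng(5)
T, n = 1.0, 1_000_000

dB = rng.normal(0.0, np.sqrt(T / n), size=n)
B = np.concatenate([[0.0], np.cumsum(dB)])       # one Brownian path on a fine grid

left_sum  = np.sum(B[:-1] * dB)                  # Ito convention: decide at the left endpoint
right_sum = np.sum(B[1:]  * dB)                  # "backward" convention: right endpoint

print("left-endpoint sum :", left_sum,  "  vs (B_T^2 - T)/2 =", (B[-1]**2 - T) / 2)
print("right-endpoint sum:", right_sum, "  vs (B_T^2 + T)/2 =", (B[-1]**2 + T) / 2)
```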

The properties of Ito's integral in Ito calculus are discussed in the subsequent section. Firstly, it is highlighted that the Ito integral of an adapted process follows a normal distribution at all times. Secondly, the concept of Ito isometry is introduced, which allows for the computation of variance. Ito isometry states that the expected value of the square of the Ito integral of a process is equal to the integral of the square of the process over time. To aid comprehension, a visual aid is employed to elucidate the notion of Ito isometry.

Continuing the discussion, the video delves into the properties of Ito integrals. It is established that the variance of the Ito integral of an adapted process corresponds to the quadratic variation of the Brownian motion, and this can be computed in a straightforward manner. The concept of martingales in stochastic processes is introduced, elucidating how the presence or absence of a drift term in a stochastic differential equation determines whether the process is a martingale. The speaker also touches upon the applications of martingales in pricing theory, underscoring the significance of comprehending these concepts within the framework of Ito calculus. The viewers are encouraged to engage in basic computation exercises to enhance their familiarity with the subject. Finally, the speaker mentions that the next topic to be covered is the Girsanov theorem.

In the subsequent section, the video delves into the Girsanov theorem, which involves transforming a stochastic process with drift into a process without drift, thereby turning it into a martingale. The Girsanov theorem holds significant importance in pricing theory and finds applications in various gambling problems within discrete stochastic processes. The guest speaker introduces the concept of the probability distribution over paths and Gaussian processes, setting the stage for understanding the theorem. Eventually, a simple formula is provided to represent the Radon-Nikodym derivative, which plays a crucial role in the Girsanov theorem.
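
In its standard formulation (a reference sketch; the lecture may state it in slightly different notation), removing a constant drift theta over the horizon [0, T] uses the Radon-Nikodym derivative below:

```latex
% Girsanov's theorem with constant drift \theta on [0, T]: define Q by
\frac{dQ}{dP} = \exp\!\Big( -\theta B_T - \tfrac{1}{2}\theta^{2} T \Big);
% then \tilde{B}_t = B_t + \theta t is a standard Brownian motion under Q.
```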

Finally, the video concludes by highlighting the broader implications of Itō calculus for stochastic processes. It emphasizes that the probability distribution of a portfolio's value over time can be measured according to a probability distribution that depends on a stock price modeled using Brownian motion with drift. Through the tools and concepts of Itō calculus, this problem can be transformed into a problem involving Brownian motion without drift by computing the expectation in a different probability space. This transformation allows for the conversion of a non-martingale process into a martingale process, which has meaningful interpretations in real-world scenarios.

To fully grasp the intricacies of Itō calculus, the video encourages viewers to practice basic computation exercises and familiarize themselves with the underlying concepts. By doing so, individuals can develop a deeper understanding of stochastic processes, stochastic integration, and the applications of Itō calculus in various fields.

In conclusion, this comprehensive video on Itō calculus covers a wide range of topics. It begins with an exploration of Ito's lemma, the quadratic variation of Brownian motion, and the concept of drift in stochastic processes. It then delves into the evaluation of stochastic processes using Ito's lemma and discusses the integration and Riemannian sum type description of integration. The video also introduces adapted processes, martingales, and the properties of Ito integrals. Finally, it highlights the Girsanov theorem and emphasizes the broader implications of Itō calculus for understanding and modeling stochastic processes.

  • 00:00:00 In this section, the professor continues the discussion on Ito calculus by reviewing Ito's lemma and stating it in a slightly more general form. The professor uses Taylor expansion to analyze how the function f changes when the first and second variables change, and uses Brownian motion to evaluate the information on the function f(t, B_t). The quadratic variation of Brownian motion and the two variables, t and x, are used to explain why Ito calculus has an additional term compared to classical calculus.

  • 00:05:00 In this section, we learn about the second-order term in the Taylor expansion by writing it down in terms of partial derivatives. We then focus on the important terms, which are ∂f/∂t dt plus ∂f/∂x dx plus the second-order terms. By rearranging the terms, we get a more sophisticated form of Ito's lemma that includes an additional term. We then see that the terms involving dt squared and dt times dB_t are negligible, while the term involving the second partial derivative of f with respect to x survives because dB_t squared equals dt. Ultimately, this leads to a more refined understanding of Ito calculus.

  • 00:10:00 In this section, the professor introduces the concept of a stochastic process with a drift term that results from adding a term to a Brownian motion. This type of process will be the main object of study, where the difference can be written in terms of a drift term and a Brownian motion term. The section then goes on to explain the general form of Ito's lemma, which is a more complicated version of the original form that deviates from it because of the quadratic variation.

  • 00:15:00 In this section, the Ito lemma is used to evaluate stochastic processes. The quadratic variation separates the second derivative term, allowing for complicated terms to be derived. An example involving the function f(x) = x^2 is given and worked out, showing how to compute d of f at B_t. The first partial derivative of f with respect to t is equal to 0, and the partial derivative with respect to x is equal to 2x, with the second derivative equal to 2 at t, x.

  • 00:20:00 In this section, the speaker explains how to calculate df(t, B_t). The formula is (∂f/∂t) dt + (∂f/∂x) dB_t + ½ (∂²f/∂x²) dB_t², where dB_t² equals dt. The speaker shows examples to help understand how to use these formulas and how to plug in the variables. They also explain the difference between sigma and a variable sigma prime in the formula and when to use each. The formula is used for Brownian motion as it is the simplest form.

  • 00:25:00 In this section, the professor explains why S_t should not simply be e^(sigma B_t), which was the proposed model for the stock price using Brownian motion. Although B_t has expected value 0, the exponential e^(sigma B_t) drifts upward, since its expectation grows like e^(sigma^2 t / 2). The solution is to subtract ½ sigma² t in the exponent, making the new model S_t = exp(-½ sigma² t + sigma B_t), a geometric Brownian motion without drift. The professor then goes on to explain that if we have a sample path of B_t, we can obtain a corresponding sample path of S_t by applying this exponential at each time.

  • 00:30:00 In this section, the video discusses the definition of integration. The definition is first given as the inverse of differentiation and is described as a "stupid" definition. The question is raised as to whether such an integral always exists for given f and g. The video then goes on to discuss the Riemannian sum type description of integration: chopping the interval into very fine pieces and summing the areas of the boxes. The integral is then defined as the limit of these Riemannian sums as n goes to infinity, which is explained in more detail.

  • 00:35:00 In this section, the professor discusses an interesting question about the Ito integral and its relation to the Riemannian sum type description. He explains that the Ito integral does not have the same property as the Riemannian sum where it doesn't matter which point is taken in the interval. Additionally, he mentions that there is an equivalent version of Ito calculus, but instead of taking the leftmost point of each interval, it takes the rightmost point, which turns out to be equivalent to Ito calculus but with minuses instead of pluses in the second-order term. Ultimately, he explains that in the real world, decisions for time intervals must be made based on the leftmost point because the future cannot be predicted.

  • 00:40:00 In this section, the speaker explains the intuition and definition behind adapted processes in Itō calculus. An adapted process is one that can only make decisions based on past information up until the current time, and this fact is hidden within the theory itself. For example, a stock strategy that makes decisions based only on past stock prices is an adapted process. This is important because Itō calculus works well in this setting, where decisions can only be made at the leftmost time point and cannot see the future. The speaker provides several examples to illustrate adapted processes, including a minimum delta t strategy, and explains their relevance to Itō calculus.

  • 00:45:00 In this section, the properties of Ito's integral in Ito calculus are discussed. The first property is that the Ito integral of an adapted process has normal distribution at all times. The second property is known as Ito isometry and can be used to compute the variance. The Ito isometry states that the expected value of the square of the Ito integral of a process is equal to the integral of the square of the process over time. A visual aid is used to explain the concept of Ito isometry.

  • 00:50:00 In this section, the speaker discusses the properties of Ito integrals. The variance of the Ito integral of an adapted process is equal to the quadratic variation of the Brownian motion, which can be computed in a simple way. The speaker also explains the concept of martingales for stochastic processes and discusses when an Ito integral can be a martingale. The integral is a martingale if the function is adapted to the Brownian motion and is a reasonable function.

  • 00:55:00 In this section of the video, the speaker discusses the concept of martingales in Itō calculus, which are stochastic processes that do not add or subtract value over time but rather add variation. They explain how the presence or absence of a drift term in a stochastic differential equation determines if the process is a martingale. The speaker also touches on applications of martingales in pricing theory and discusses the importance of understanding these concepts in Itō calculus. They encourage viewers to practice with basic computation exercises to become more familiar with the subject. Finally, they mention the Girsanov theorem as the next topic they will cover.

  • 01:00:00 In this section, the topic of changing probability distributions through a change of measure is discussed using Brownian motion as an example. The question is whether it is possible to switch between two probability distributions over paths of Brownian motion, one without drift and the other with drift, by a change of measure. This is equivalent to finding a Radon-Nikodym derivative that makes the two probability distributions equivalent. The concept of changing probability distributions through a change of measure is important in analysis and probability and is used in finding the Radon-Nikodym derivative.

  • 01:05:00 In this section, we learn about probability distributions, how they assign probabilities to subsets of a set, and when two probability distributions are equivalent. We also learn about the Radon-Nikodym theorem, which applies to all probability spaces and says that if two measures are equivalent, one can be obtained from the other simply by multiplying by a density, the Radon-Nikodym derivative. Additionally, the section explores Girsanov's theorem, which says that the laws of Brownian motion with and without drift are equivalent in this sense, even though they may appear very different at first glance.

  • 01:10:00 In this section, the concept of the Girsanov theorem is discussed, which involves switching a stochastic process into a stochastic process without drift, thereby making it into a martingale. This theorem has significant meaning in pricing theory and applies to a range of gambling problems in discrete stochastic processes. The guest speaker introduces the concept of the probability distribution over paths and Gaussian processes. Eventually, they provide a simple formula to represent the Radon-Nikodym derivative.

  • 01:15:00 In this section, the speaker discusses the Itō Calculus and its implications for stochastic processes. The probability distribution of a portfolio's value over time can be measured according to a probability distribution that depends on a stock price modeled using Brownian motion with drift. This can be transformed into a problem about Brownian motion without drift by computing the expectation in a different probability space. This allows for the transformation of a non-martingale process into a martingale process, which has good physical meanings.

Source: "18. Itō Calculus", MIT 18.S096 Topics in Mathematics with Applications in Finance, Fall 2013 (Instructor: Choongbum Lee), published 2015-01-06 on www.youtube.com. View the complete course: http://ocw.mit.edu/18-S096F13

19. Black-Scholes Formula, Risk-neutral Valuation

In this informative video, the Black-Scholes Formula and risk-neutral valuation are thoroughly discussed, providing valuable insights into their practical applications in the field of finance. The video begins by illustrating the concept of risk-neutral pricing through a relatable example of a bookie accepting bets on horse races. By setting odds based on the total bets already placed, the bookie can ensure a riskless profit, regardless of the race outcome. This example serves as a foundation for understanding derivative contracts, which are formal payouts linked to an underlying liquid instrument.

The video proceeds by introducing different types of contracts in finance, including forward contracts, call options, and put options. A forward contract is an agreement between two parties to buy an asset at a predetermined price at a future date. A call option gives the holder the right, but not the obligation, to buy the asset at an agreed strike price, making it a bet on, or protection against, a rise in the asset's price. Conversely, a put option gives the holder the right to sell the asset at a predetermined price, allowing investors to bet on or insure against the asset's decline. The prices of these contracts are computed from inputs such as the current price of the underlying asset and its volatility.

The concept of risk neutrality is then introduced, emphasizing that the price of an option, when the payout is fixed, depends solely on the dynamics and volatility of the stock. Market players' risk preferences do not affect the option price, highlighting the significance of risk-neutral pricing. To illustrate this, a two-period market with no uncertainty is presented, and option prices are calculated using the risk-neutral valuation method, which makes no use of real-world probabilities. The example involves borrowing cash to buy stock and setting the forward price so that the contract has zero initial value.

The video delves into the concept of replicating portfolios, specifically within the context of forward contracts. By taking a short position in a forward contract and combining stock and cash, a replicating portfolio is constructed, ensuring an exact replication of the final payoff. The goal of risk-neutral pricing is to identify replicating portfolios for any given derivative, as the current price of the derivative should match the price of the replicating portfolio.
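
A minimal version of that replication argument, assuming a constant interest rate $r$, no dividends, and maturity $T$ (assumptions not spelled out in the summary): holding the stock and borrowing $K e^{-rT}$ in cash is worth $S_T - K$ at maturity, exactly the forward payoff, so the forward's value today is

$$V_0 = S_0 - K\,e^{-rT}, \qquad\text{and the fair forward price is the } K \text{ for which } V_0 = 0,\ \text{i.e. } F = S_0\,e^{rT}.$$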

Further exploration is devoted to pricing a general payoff using the Black-Scholes formula and risk-neutral valuation. A replicating portfolio, consisting of a bond and a certain amount of stock, is introduced as a means to replicate the derivative's payoff at maturity, regardless of real-world probabilities. The video introduces the concept of the risk-neutral measure, or martingale measure, which exists independently of the real world and plays a fundamental role in pricing derivatives: the value of any derivative is simply the expected value of its payoff under this measure. The dynamics of the underlying stock and the importance of the standard deviation of the Brownian motion, which grows like the square root of T, are also discussed, with the key step presented as essentially a Taylor expansion carrying one extra term that arises from that standard deviation.
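
Compactly, and using generic symbols not fixed by the video: under the risk-neutral (martingale) measure $Q$ the stock drifts at the interest rate $r$, and any payoff $g(S_T)$ is worth its discounted expectation,

$$dS_t = r\,S_t\,dt + \sigma\,S_t\,dW_t^{Q}, \qquad V_0 = e^{-rT}\,\mathbb{E}^{Q}\big[g(S_T)\big].$$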

The video then delves into solving the partial differential equation for the Black-Scholes model, which relates the current derivative price to its hedging strategy and is applicable to all tradable derivatives based on stock volatility. Replicating portfolio coefficients are determined for any time, enabling the perfect replication of a derivative's performance through the purchase of stock and cash. This hedge carries no risk, allowing traders to collect a fee on the transaction.
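
For reference, the equation being solved here is the Black-Scholes partial differential equation (with $V(t,S)$ the derivative price, $\sigma$ the stock volatility, and $r$ the interest rate; notation assumed rather than quoted),

$$\frac{\partial V}{\partial t} + r S\,\frac{\partial V}{\partial S} + \tfrac12\,\sigma^2 S^2\,\frac{\partial^2 V}{\partial S^2} = r V,$$

and the stock holding of the replicating portfolio at any time is the delta, $\partial V/\partial S$.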

Furthermore, the speaker explains how the Black-Scholes equation can be transformed into a heat equation, facilitating the use of numerical methods for pricing derivatives with complex payouts or dynamics. The video highlights the significance of approaching the problem from a risk-neutral perspective to determine the derivative's price as the expected value of the payout discounted by the risk-neutral probability at maturity. The importance of the risk-neutral measure, where the stock's drift equals the interest rate, is emphasized through a binary example.
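
A small sketch of that risk-neutral recipe in code: price a European call as the discounted expected payout under log-normal dynamics and compare with the closed-form Black-Scholes value. The function names and parameter values below are illustrative, not taken from the lecture.

```python
import numpy as np
from scipy.stats import norm

def bs_call(S0, K, r, sigma, T):
    """Closed-form Black-Scholes price of a European call."""
    d1 = (np.log(S0 / K) + (r + 0.5 * sigma**2) * T) / (sigma * np.sqrt(T))
    d2 = d1 - sigma * np.sqrt(T)
    return S0 * norm.cdf(d1) - K * np.exp(-r * T) * norm.cdf(d2)

def mc_call(S0, K, r, sigma, T, n_paths=500_000, seed=0):
    """Risk-neutral Monte Carlo: discounted expected payout under GBM with drift r."""
    rng = np.random.default_rng(seed)
    Z = rng.standard_normal(n_paths)
    ST = S0 * np.exp((r - 0.5 * sigma**2) * T + sigma * np.sqrt(T) * Z)
    return np.exp(-r * T) * np.maximum(ST - K, 0.0).mean()

if __name__ == "__main__":
    S0, K, r, sigma, T = 100.0, 105.0, 0.02, 0.25, 1.0   # illustrative inputs
    print("Black-Scholes:", round(bs_call(S0, K, r, sigma, T), 4))
    print("Monte Carlo  :", round(mc_call(S0, K, r, sigma, T), 4))
```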

For more complicated derivative payoffs, such as American payoffs, Monte Carlo simulations or finite difference methods must be employed. The video emphasizes the necessity of these approaches when the assumption of constant volatility, as assumed in the Black-Scholes formula, does not hold true in real-world scenarios.

The video introduces the concept of call-put parity (put-call parity), which establishes a relationship between the price of a call and the price of a put with the same strike price. By constructing a replicating portfolio consisting of a call, a put, and stock, investors can guarantee a specific payout at the end. The speaker further demonstrates how call-put parity can be utilized to price digital contracts, which have binary payouts based on whether the stock finishes above or below the strike price. This can be achieved by leveraging the idea of a replicating portfolio and the prices of calls.
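
In symbols (with $C(K)$ and $P(K)$ the call and put prices for strike $K$, $S_0$ the current stock price, and $e^{-rT}$ the zero-coupon bond; notation assumed rather than quoted), the parity relation is

$$C(K) - P(K) = S_0 - K\,e^{-rT},$$

which follows because a portfolio long one call and short one put pays exactly $S_T - K$ at maturity.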

In the subsequent section, the speaker elaborates on replicating portfolios as a means to hedge complicated derivatives. Through an example involving the purchase of a call with strike price K minus 1/2 and the sale of a call with strike price K plus 1/2, combined to create a ramp-shaped payout, the speaker demonstrates how the approximation can be tightened by instead trading strikes K minus 1/4 and K plus 1/4, which halves the width of the ramp, and then rescaling the position 2:1 so the full payout is restored. Repeating this with ever smaller epsilon, buying and selling more contracts each time, approximates the digital price. The speaker explains how taking derivatives of the call price with respect to strike relates to these ramp and digital payouts, and provides insights into the real-life practices employed to minimize risk.
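
A sketch of that tightening call-spread construction, using Black-Scholes call prices purely as stand-ins for quoted market prices; the strikes, widths, and parameters are illustrative assumptions, not values from the lecture.

```python
import numpy as np
from scipy.stats import norm

def bs_call(S0, K, r, sigma, T):
    """Closed-form Black-Scholes price of a European call (stand-in for market quotes)."""
    d1 = (np.log(S0 / K) + (r + 0.5 * sigma**2) * T) / (sigma * np.sqrt(T))
    d2 = d1 - sigma * np.sqrt(T)
    return S0 * norm.cdf(d1) - K * np.exp(-r * T) * norm.cdf(d2)

def digital_via_call_spread(S0, K, r, sigma, T, eps):
    """Buy 1/(2*eps) calls struck at K-eps and sell 1/(2*eps) calls struck at K+eps:
    the combined payout is a ramp that approximates a digital paying 1 if S_T > K."""
    return (bs_call(S0, K - eps, r, sigma, T) - bs_call(S0, K + eps, r, sigma, T)) / (2 * eps)

if __name__ == "__main__":
    S0, K, r, sigma, T = 100.0, 100.0, 0.02, 0.25, 1.0
    d2 = (np.log(S0 / K) + (r - 0.5 * sigma**2) * T) / (sigma * np.sqrt(T))
    exact = np.exp(-r * T) * norm.cdf(d2)   # closed-form digital price, for comparison only
    for eps in (0.5, 0.25, 0.125):
        print(f"eps={eps}: call-spread approximation = {digital_via_call_spread(S0, K, r, sigma, T, eps):.5f}")
    print(f"exact digital price              = {exact:.5f}")
```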

Overall, this video provides comprehensive coverage of risk-neutral pricing, including the Black-Scholes formula, call-put parity, and replicating portfolios. It offers valuable insights into the pricing and hedging of complicated derivatives, while acknowledging the need for more advanced techniques in certain scenarios. By understanding these concepts, individuals can gain a deeper understanding of risk management and its applications in the financial realm.

  • 00:00:00 In this section, the concept of risk-neutral pricing is explained through a simple example of a bookie accepting bets on horse races. The bookie with good knowledge of the horses sets the odds according to real-life probabilities, but if he sets the odds based on the total bets already placed, he can make a riskless profit regardless of which horse wins. The example leads to a discussion on derivative contracts, which are formal payouts connected to an underlying liquid instrument, usually traded on exchanges or over-the-counter. The simpler derivative, a forward contract, is introduced as an agreement by one party to buy an asset from another party at a predetermined price at a specific future time.

  • 00:05:00 In this section, the video discusses different types of contracts in finance, including a forward contract, a call option, and a put option. A forward contract is an obligation to buy an asset for an agreed price in the future. A call option gives the holder the right, but not the obligation, to buy an asset at a price agreed today; its payout is always non-negative, the maximum of S minus K and zero. A put option, on the other hand, is a bet on (or insurance against) the asset going down, so the payout is the maximum of K minus S and zero. The video also explains how the current prices of these contracts can be determined based on certain assumptions, such as the current price of the underlying asset and its volatility.

  • 00:10:00 In this section of the video, it is explained how there is no uncertainty in the price of an option when the payout is fixed, and the option price only depends on the dynamics and volatility of the stock. The concept of risk neutrality is introduced, which means that the option price has nothing to do with the risk preferences of market players or counterparties. The video then demonstrates a simple example of a two-period market with no uncertainty, where the option prices are calculated using the risk-neutral valuation method rather than hand-wavy real-world probabilities. The example involves borrowing cash from the bank to buy the stock and setting the forward price so that the contract's initial value is zero.

  • 00:15:00 In this section, the concept of a forward contract is explained in terms of a replicating portfolio. The speaker discusses how by taking a short position in a forward contract and using a combination of stock and cash, a replicating portfolio can be created that guarantees the final payoff. The goal of risk-neutral pricing is to find such a replicating portfolio for any given derivative. If a replicating portfolio is created, the current price of the derivative should be the same as the price of the replicating portfolio.

  • 00:20:00 In this section, the speaker discusses the process of pricing a general payoff F using the Black-Scholes formula and risk-neutral valuation. To do so, the speaker introduces the concept of a replicating portfolio consisting of a bond and some amount of stock. They explain that the replicating portfolio is designed so that, no matter what the real-world probabilities are, the payoff is replicated exactly at maturity. The speaker goes on to describe the risk-neutral measure, or martingale measure, which exists independently of the real world; the value of any derivative is just the expected value of its payoff under this measure. Moreover, the speaker discusses the dynamics of the underlying stock and the importance of the standard deviation of the Brownian motion being on the scale of the square root of T, noting that the derivation amounts to a Taylor expansion with one more term arising from that standard deviation.

  • 00:25:00 In this section, the video explains the process of solving the partial differential equation for the Black-Scholes model. The equation connects the current price of a derivative to its hedging strategy and is applicable to all tradable derivatives as it depends only on the volatility of the stock. The video also describes finding replicating portfolio coefficients (a and b) for any time, allowing for the perfect replication of a derivative's performance through the purchase of stock and cash. This hedge carries no risk, and traders can collect a fee on this transaction.

  • 00:30:00 In this section, the speaker explains that the Black-Scholes equation can be transformed into a well-known and understood heat equation, which can be solved through numerical methods for more complex payouts or dynamics. The final payout conditions and boundary conditions for calls and puts are also discussed, and the speaker notes that for simple payouts under the Black-Scholes log-normal dynamics, the equations can be solved exactly. The speaker also highlights the importance of approaching the problem from a risk-neutral standpoint, so that the price of the derivative is the expected value of the payout at maturity discounted back under the risk-neutral probability. The risk-neutral measure is the one under which the drift of the stock is the interest rate, as seen in the binary example.

  • 00:35:00 In this section, the speaker discusses the calculation of the Black-Scholes formula by taking the expected value of the call and put payouts against the log-normal terminal distribution. For more complicated payoffs, such as American-style payoffs, Monte Carlo simulations or finite differences must be used. The speaker also gives an example of the replicating portfolio in action using IBM stock options and explains how put-call parity can be used to price puts when volatility is not constant. The discussion acknowledges that the Black-Scholes assumption of constant volatility does not always hold true in the real world, and more complicated methods must be used to price certain options.

  • 00:40:00 In this section, the speaker explains the concept of call-put parity, which is a relationship between the price of a call and the price of a put for the same strike. By creating a replicating portfolio with a call, a put, and stock, an investor can guarantee a payout at the end. The speaker also uses call-put parity to price a digital contract, which has a binary payout based on whether the stock finishes above or below the strike price. This can be done by using the idea of a replicating portfolio and the prices of calls.

  • 00:45:00 In this section, the speaker explains the concept of replicating portfolios, which are a way to hedge complicated derivatives. They demonstrate this with an example of buying a call with strike K minus 1/2 and selling a call with strike K plus 1/2, combining them into a ramp-shaped payout. They then show how to refine the approximation by trading strikes K minus 1/4 and K plus 1/4 instead, which halves the width of the ramp, and rescaling the position 2:1. Using a small epsilon and buying and selling multiple contracts in this way approximates the digital price. They show how the derivative of the call price with respect to strike relates to these ramp and digital payouts and explain how all of this is done in real life to reduce risk.
19. Black-Scholes Formula, Risk-neutral Valuation
  • 2015.01.06
  • www.youtube.com
MIT 18.S096 Topics in Mathematics with Applications in Finance, Fall 2013. View the complete course: http://ocw.mit.edu/18-S096F13. Instructor: Vasily Strela.
 

20. Option Price and Probability Duality



20. Option Price and Probability Duality

In this section, Dr. Stephen Blythe delves into the relationship between option prices and probability distributions, shedding light on the formula for replicating any derivative product with a given payout function. He emphasizes that call options are fundamental and can be used to replicate any continuous function, making them essential in the financial realm. Blythe also explores the limitations of using call options alone to determine the underlying stochastic process for a stock price, suggesting that alternative bases of functions capable of spanning continuous functions can also be employed.

The video takes a brief intermission as Dr. Blythe shares an intriguing historical anecdote related to the Cambridge Mathematics Tripos. This examination, which tested the mathematical knowledge of notable figures such as Lord Kelvin, John Maynard Keynes, and Karl Pearson, played a significant role in shaping the field of applied mathematics.

Returning to the main topic, Dr. Blythe introduces the concept of option price and probability duality, highlighting the natural duality between these two aspects. He explains that complicated derivative products can be understood as probability distributions, and by switching back and forth between option prices, probabilities, and distributions, they can be discussed in a more accessible manner.

The video proceeds with the introduction of notation for option prices and the explanation of the payout function of a call option. Dr. Blythe constructs a portfolio consisting of two calls and uses limits to find the partial derivative of the call price with respect to the strike price. He also introduces the concept of a call spread, which represents the spread between two calls with a specific payout function.

Dr. Blythe then delves into the duality between option prices and probabilities, focusing on the Fundamental Theorem of Asset Pricing (FTAP). He explains that option prices are expected values of future payouts discounted to the present, and the payout of a digital option is related to the probability of the stock price being greater than a certain level at maturity. Using calculus, he demonstrates that the limit of the call spread tends to the digital option, so the price of the digital equals minus the partial derivative of the call price with respect to the strike price. The speaker notes the theoretical distinction between the payout triggering when the terminal price is strictly greater than the strike versus greater than or equal to it, a distinction with no practical significance.
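
In symbols (with $C(K)$ the call price, $D(K)$ the price of a digital paying 1 if the stock finishes above $K$, and $Z(0,T)$ the zero-coupon bond; notation assumed here rather than quoted), the relationships described are

$$D(K) = \lim_{\varepsilon \to 0}\frac{C(K) - C(K+\varepsilon)}{\varepsilon} = -\frac{\partial C}{\partial K} = Z(0,T)\,\mathbb{Q}\big(S_T > K\big).$$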

Next, the speaker delves into the connection between option prices and probability by introducing the Fundamental Theorem of Asset Pricing. This theorem establishes that the price ratio of a derivative to a zero-coupon bond is a martingale with respect to the stock price under the risk-neutral distribution. Dr. Blythe explains how this theorem enables one to go from the probability density to the price of any derivative, allowing for a deeper analysis of the relationship between probability and option pricing.
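
Stated compactly (with $V_t$ the derivative price, $Z(t,T)$ the zero-coupon bond maturing at $T$, and $g$ the payout; symbols assumed rather than quoted), the theorem as it is used here reads

$$\frac{V_t}{Z(t,T)} = \mathbb{E}^{\mathbb{Q}}\big[\,g(S_T)\,\big|\,\mathcal{F}_t\,\big], \qquad\text{so in particular}\qquad V_0 = Z(0,T)\,\mathbb{E}^{\mathbb{Q}}\big[g(S_T)\big],$$

which is exactly the bridge from a risk-neutral density to the price of any payout.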

The video moves on to discuss a method for accessing the density function through a portfolio of options, specifically using the call butterfly strategy. Dr. Blythe explains that a call butterfly spread, constructed by appropriately scaling the difference between two call spreads, can approximate the second derivative needed to obtain the density function. While it may not be feasible to go infinitely small in the real world, trading call butterflies with specific strike prices provides a reasonable approximation to the probability of the underlying asset being within a particular interval.

Building upon this idea, Dr. Blythe explains how the butterfly spread portfolio can be used to access the second derivative and obtain the density function. By taking suitable limits of the butterfly spread, he arrives at the density function f(x), which serves as a model-independent probability measure for the underlying random variable at maturity. This probability measure allows individuals to assess whether they agree with the probability implied by the price of the butterfly and make informed investment decisions. Dr. Blythe emphasizes that these relationships are model-independent, and they hold true regardless of the specific model used for option pricing.
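
A sketch of the butterfly idea in code: given call prices on a strike grid (generated here from a Black-Scholes model purely as stand-in "market" quotes), the scaled butterfly, which is a second finite difference in strike, approximates the risk-neutral density. Strikes and parameters are illustrative assumptions.

```python
import numpy as np
from scipy.stats import norm

def bs_call(S0, K, r, sigma, T):
    """Black-Scholes call prices, used only as stand-in market quotes."""
    d1 = (np.log(S0 / K) + (r + 0.5 * sigma**2) * T) / (sigma * np.sqrt(T))
    d2 = d1 - sigma * np.sqrt(T)
    return S0 * norm.cdf(d1) - K * np.exp(-r * T) * norm.cdf(d2)

S0, r, sigma, T = 100.0, 0.0, 0.2, 1.0            # illustrative parameters (r = 0 for simplicity)
strikes = np.arange(60.0, 140.0 + 1e-9, 1.0)      # strike grid with spacing 1
calls = bs_call(S0, strikes, r, sigma, T)

# Call butterfly centered at K: C(K - dK) - 2 C(K) + C(K + dK), scaled by 1/dK^2.
dK = strikes[1] - strikes[0]
density = (calls[:-2] - 2.0 * calls[1:-1] + calls[2:]) / dK**2   # approximates d^2 C / dK^2 = f(K) when r = 0

# The implied density should integrate to roughly 1 over a wide enough strike range.
print("approximate total probability:", round(float(density.sum() * dK), 4))
```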

In the following section, Dr. Stephen Blythe, a quantitative finance lecturer, elaborates on the relationship between option prices and probability distributions. He explains that the probability distribution of a security at a particular time is conditioned on its price at the present time, and the martingale condition is with respect to the same price. Dr. Blythe then takes a moment to share an interesting historical tidbit about the Cambridge Mathematics degree, which played a pivotal role in shaping the syllabus for applied math concentrators.

Moving forward, the speaker delves into the Fundamental Theorem of Asset Pricing (FTAP). This theorem states that the price-to-zero-coupon-bond ratio is a martingale with respect to the stock price under the risk-neutral distribution. It provides a framework to go from the probability density to the price of any derivative. Dr. Blythe emphasizes that the density can also be derived from call prices, and these two routes are interconnected through the Fundamental Theorem, allowing for a deeper analysis of the relationship between probability and option pricing.

In the subsequent section, Dr. Blythe explains that the prices of call options across all strike prices determine the price of any given European derivative payout. Call options span the space of European derivative payouts: a payout function can be replicated by constructing a portfolio of calls, and if the derivative's payout matches a linear combination of call payouts at maturity, the two must have the same value today. This concept is underpinned by the fundamental assumption of finance, known as no arbitrage, which states that if two things will be worth the same amount in the future, they should have the same value today. However, Dr. Blythe acknowledges that this assumption has been challenged in finance since the financial crisis of 2008.

Continuing the discussion, the video presents a thought-provoking economic question about financial markets and arbitrage. When the maturity time (capital T) is set far into the future, nothing prevents the prices of the option and the replicating portfolio from diverging if arbitrage breaks down, which can produce a substantial difference between the two instruments; empirically, prices have indeed deviated from one another. Dr. Blythe mentions that long-term investors, such as the Harvard endowment, focus on their annual and five-year returns instead of exploiting such a price discrepancy over a 10-year horizon. He then introduces a mathematical result asserting that, in the limit, any continuous payout function can be replicated by calls without exception.

The speaker proceeds to discuss the formula for replicating an arbitrary derivative product with a given payout function, denoted g(x) (or g(S)) at maturity. The formula gives explicit instructions for the replication: hold g(0) zero-coupon bonds, g'(0) units of the stock, and a linear combination of call options across strikes. Dr. Blythe supports this formula by taking expected values and emphasizes the duality between option prices and probabilities, highlighting call options as the fundamental information that spans the entire spectrum of payouts. The formula also poses intriguing questions that warrant further exploration.
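
The replication formula being described, for a twice-differentiable payout $g$ (this is the standard spanning identity; the precise notation is assumed rather than quoted from the talk), is

$$g(S_T) = g(0) + g'(0)\,S_T + \int_0^{\infty} g''(K)\,(S_T - K)^{+}\,dK,$$

so that today's price is $g(0)\,Z(0,T) + g'(0)\,S_0 + \int_0^{\infty} g''(K)\,C(K)\,dK$, assuming no dividends so that $S_0$ is the price of receiving $S_T$ at maturity.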

Addressing an important aspect, Dr. Blythe explores whether it is possible to determine the stochastic process for a stock price over a given period by knowing all call option prices for various maturities and prices. He argues that the answer is no because the stock price can instantaneously fluctuate over a small time interval, without any constraints on the continuity of the process or mathematical limitations. However, if the stock follows a diffusion process, it becomes feasible to determine the process, resulting in an elegant and practical solution. In reality, one can only know a finite subset of call options, further emphasizing the limitations of fully determining the underlying stochastic process solely based on call option prices.

Dr. Blythe goes on to explain that even with access to a large number of European call option prices, there may still be complex or nonstandard derivative products whose prices cannot be uniquely determined by knowing only those options. He highlights that the set of call options alone does not provide complete information about the underlying stochastic process, even if all call options are known. To overcome this limitation, Dr. Blythe suggests considering alternative bases for the span of all possible payouts. He notes that any arbitrary set of functions capable of spanning a continuous function can be used, although using call options often offers the most elegant approach.

Continuing the discussion, Dr. Blythe elucidates the relationship between call option prices and terminal distributions. He asserts that the terminal distribution can be uniquely determined by the prices of call options. By considering the ratio of Z over theta, a particular risk-neutral density for each stock can be obtained. This highlights the interconnectedness between call option prices and the density of the underlying stock price at maturity, providing valuable insights into model-independent probability measures.

As the section draws to a close, Dr. Blythe reiterates the importance of understanding the connections between option prices and probability distributions in finance. These insights enable analysts and traders to make informed judgments about the implied probabilities reflected in option prices and adjust their investment decisions accordingly. Dr. Blythe emphasizes that these relationships hold true regardless of the specific model used for option pricing, further underscoring their significance in quantitative finance.

In summary, Dr. Stephen Blythe's presentation explores the intricate relationship between option prices and probability distributions. He discusses the rise of financial engineering and the quantitative analyst career path, which was influenced by the cancellation of the Superconducting Super Collider. Dr. Blythe introduces the concept of option price and probability duality, emphasizing the natural duality between option prices and probability distributions. He explores the Fundamental Theorem of Asset Pricing and its implications for understanding option prices and probabilistic approaches in finance. Dr. Blythe provides examples of using butterfly spreads and other trading objects to access density functions and make judgments about implied probabilities. The presentation also includes historical anecdotes about the Cambridge Mathematics Tripos, showcasing notable mathematicians' involvement in finance. Through these discussions, Dr. Blythe sheds light on the deep connections between option prices, probabilities, and the fundamental principles of asset pricing.

  • 00:00:00 This section contains the introduction to a new speaker, Dr. Stephen Blythe, who presents on finance and quantitative finance. Before starting his presentation, he asks the audience a question related to an important event in finance, on which Congress had voted 20 years ago. Congress voted to cut financing to the Superconducting Super Collider underneath Texas just south of Dallas.

  • 00:05:00 In this section, the speaker discusses the impact of the cancellation of the Superconducting Super Collider by Congress, which took place in the 1990s. As a result of this decision, the market for academic physicists collapsed almost overnight, leading to many seeking employment in finance. This event, combined with the growth of the derivatives market and a need to build new theoretical frameworks to solve the problems in the market resulted in the rise of the financial engineering field and the creation of the quantitative analyst career path. The speaker himself started his career in academics and later shifted to finance before returning to academia and is now teaching a course at Harvard on Applied Quantitative Finance. His course covers building theoretical frameworks and using them to solve real-world problems encountered in the financial market.

  • 00:10:00 In this section of the video, the professor introduces the concept of option price and probability duality. He explains that all derivative products can be defined in terms of a payout function, and he defines three assets: call option, zero-coupon bond, and digital option. He notes that the underlying theory of finance is driven by real-world examples, and the probabilistic approach to understanding finance is particularly elegant. The professor emphasizes the natural duality between option prices and probability distributions, stating that these complicated derivatives are in fact just probability distributions, and they can be discussed in an easily understandable way by going back and forth between option prices, probabilities, and distributions.

  • 00:15:00 In this section, the speaker introduces notation for option prices and explains the payout function of a call. They construct a portfolio consisting of two calls and use limits to find the partial derivative of the call price with respect to K. The speaker also mentions that the call spread is the spread between two calls with a particular payout function.

  • 00:20:00 In this section, the speaker explains the duality between option prices and probabilities, based on the Fundamental Theorem of Asset Pricing (FTAP). Specifically, the speaker assumes that prices today are expected values of future payouts discounted to the present, and that the payout of a digital option is related to the probability of the stock being greater than a certain price at maturity. The speaker uses calculus to show that the limit of the call spread tends to the digital, and that the price of the digital equals minus the partial derivative of the call price with respect to the strike price. The speaker also discusses whether the payout condition should be "greater than" or "greater than or equal to" the strike, noting that this theoretical distinction does not matter in practice.

  • 00:25:00 In this section, the speaker discusses the connection between option prices and probability by introducing the Fundamental Theorem of Asset Pricing. The pricing formula is obtained by taking the expected value under the risk-neutral distribution, and it holds exactly. Martingales play a crucial role in this formalization of asset pricing, and it took a while for the approach to be embraced on the trading floor even though the underlying theory had always been present. By equating two prices for the digital option, the speaker establishes a link between call prices and the density of the underlying stock price at the maturity T.

  • 00:30:00 In this section, the speaker explains a way to access the density function through a portfolio of options by considering the difference between two call spreads, appropriately scaled, which is known as a call butterfly. This traded object helps approximate the second derivative that leads to the density function. Although it is not possible to go infinitely small in the real world, one can trade, say, a 150/160/170 call butterfly, which is a reasonable approximation to the probability of the stock being in that interval.

  • 00:35:00 In this section, Blythe explains how the butterfly spread portfolio can be used to access the second derivative via the price of the butterfly. By taking limits of the butterfly spread at suitable scales, Blythe obtains a density function f(x), which can be used as a model-independent probability measure of the underlying random variable being at K at maturity. Based on this probability measure, people can make a judgement on whether they agree with the probability implied by the price of the butterfly and buy it accordingly. Blythe notes that these relationships are model-independent, and will hold irrespective of the model for the option prices.

  • 00:40:00 In this section, Stephen Blythe, a lecturer on quantitative finance, discusses the relationship between option prices and probability distributions. He explains that the probability distribution of a security at a certain time is conditional on the price of that security at the present time and that the martingale condition is with respect to the same price as well. Blythe also takes a quick break from the discussion and shares a historical anecdote about the Cambridge Mathematics degree and how it generated the entire syllabus for applied math concentrators.

  • 00:45:00 In this section, the speaker shares some interesting historical facts about the Cambridge Mathematics Tripos, which is an examination that was held in Cambridge to test mathematical knowledge. He talks about the achievements of notable people who took the exam, including Lord Kelvin, John Maynard Keynes, and Karl Pearson. The speaker then transitions to discussing the relationship between option prices and probabilities. He explains that the Fundamental Theorem of Asset Pricing asserts that option prices are the discounted expected payout at maturity, and if this theorem holds, it is possible to go from probability to option price.

  • 00:50:00 In this section, the speaker discusses the Fundamental Theorem of Asset Pricing (FTAP), which states that the ratio of the price to the zero-coupon bond is a martingale with respect to the stock price under the risk-neutral distribution. This theorem allows for a way to go from the probability density to the price of any derivative. The speaker notes that the density can also be derived from the call prices, and these two routes are interconnected through the Fundamental Theorem. This allows for a way to analyze and understand the relationship between probability and option pricing.

  • 00:55:00 In this section, the speaker explains that knowing the prices of all call options, for all strike prices, determines the price of any given derivative payout. Call options span the space of European derivative payouts: a payout function defines the derivative, which can be replicated by a portfolio of calls, and if the derivative's payout equals a linear combination of call payouts at maturity, the two must be worth the same today. The fundamental assumption of finance, no arbitrage, underlies this concept and dictates that if two things will be worth the same amount in a year, they are worth the same today. However, since 2008 this assumption has been challenged in finance.

  • 01:00:00 In this section, the video presents a deep economic question about financial markets and arbitrage. When capital T is set far in the future, nothing stops the prices of the option and the replicating portfolio from moving away from each other if arbitrage breaks down, which can lead to a very big difference between the two instruments; empirically, the prices have been shown to move apart. The speaker mentions that the Harvard endowment is a long-term investor and explores why it doesn't buy the cheaper instrument and hold it for 10 years to make money; the answer is that it cares about its annual and five-year returns. Additionally, the speaker presents a mathematical result stating that any continuous function can be replicated by calls, with no exceptions, in the limit.

  • 01:05:00 In this section, the speaker discusses the formula for replicating an arbitrary derivative product with payout g(x), or g(S), at maturity. The formula explicitly shows how to replicate it with g(0) zero-coupon bonds, g'(0) units of stock, and a linear combination of calls. The speaker proves the formula by taking expected values and discusses the duality of option prices and probabilities in different ways, highlighting the importance of call options as the primitive information that spans everything. The formula also raises interesting questions for further discussion.

  • 01:10:00 In this section, the speaker discusses whether one can determine the stochastic process for a stock price over a period by knowing all call option prices for all maturities and all strikes. The speaker argues that the answer is no, because the stock could move arbitrarily over a small time interval; nothing constrains the continuity of the process. However, the process can be determined if the stock follows a diffusion, and the result is elegant and practical. The practical caveat is that in reality one will know only a finite subset of call option prices.

  • 01:15:00 In this section, Stephen Blythe explains that even if a trader has access to a large number of European call option prices, there may be some complex or nonstandard derivative products whose price is not determined uniquely simply by knowing those options. This is because the set of call options does not determine the underlying stochastic process, even if one knows all of them. Blythe also discusses the suggestion of selecting another basis for the span of all possible payouts instead of call options, and explains that any arbitrary basis of functions that can span a continuous function can work, but using call options is often the most elegant method for this purpose.

  • 01:20:00 In this section, Stephen Blythe explains the relationship between call option prices and terminal distribution, whereby the latter is uniquely determined by the former. He also notes that taking Z over theta results in a particular risk-neutral density for each stock.
20. Option Price and Probability Duality
  • 2015.01.06
  • www.youtube.com
MIT 18.S096 Topics in Mathematics with Applications in Finance, Fall 2013. View the complete course: http://ocw.mit.edu/18-S096F13. Instructor: Stephen Blythe.
 

21. Stochastic Differential Equations



21. Stochastic Differential Equations

This video provides an in-depth exploration of various methods for solving stochastic differential equations (SDEs). The professor begins by highlighting the challenge of finding a stochastic process that satisfies a given equation. However, they reassure the audience that, under certain technical conditions, there exists a unique solution with specified initial conditions. The lecturer introduces the finite difference method, Monte Carlo simulation, and tree method as effective approaches to solve SDEs.

The professor delves into the technical conditions necessary for solving SDEs and emphasizes that these conditions typically hold, making it easier to find solutions. They demonstrate a practical example of solving a simple SDE using an exponential form and applying a guessing approach along with relevant formulas. Additionally, the speaker illustrates how to analyze the components of an SDE to backtrack and find the corresponding function. They introduce the Ornstein-Uhlenbeck process as an example of a mean-reverting stochastic process, shedding light on its drift and noise terms.
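
Two concrete instances of the equations discussed here, written with generic symbols (not necessarily the lecture's notation): the exponential equation solved by the guess-and-check approach, and the Ornstein-Uhlenbeck equation,

$$dX_t = \mu X_t\,dt + \sigma X_t\,dB_t \;\Longrightarrow\; X_t = X_0\,\exp\!\big((\mu - \tfrac12\sigma^2)\,t + \sigma B_t\big), \qquad dX_t = -\alpha X_t\,dt + \sigma\,dB_t \;\;(\alpha > 0),$$

the latter having a drift that pulls $X_t$ back toward zero and a noise term independent of the current value.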

Moving on to specific solution methods, the professor explains how the finite difference method, commonly used for ordinary and partial differential equations, can be adapted to tackle SDEs. They describe the process of breaking down the SDE into small intervals and approximating the solution using Taylor's formula. The lecturer also discusses the challenges posed by the inherent uncertainty of Brownian motion in the finite difference method and presents a solution involving a fixed sample Brownian motion path.
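
The finite-difference step being described, applied along one fixed sample path of the Brownian motion $B$ with step size $h$, for an SDE $dX_t = \mu(X_t)\,dt + \sigma(X_t)\,dB_t$, is

$$X_{(k+1)h} \;\approx\; X_{kh} + \mu\big(X_{kh}\big)\,h + \sigma\big(X_{kh}\big)\,\big(B_{(k+1)h} - B_{kh}\big).$$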

Next, the lecturer explores the Monte Carlo simulation method for solving SDEs. The idea is to draw many sample Brownian paths, compute X(1) along each of them starting from X(0), and thereby obtain an approximate probability distribution for X(1). The speaker notes that the plain finite difference method cannot absorb the randomness on its own, but once a Brownian path has been fixed it can be run along that path, and repeating this over many sampled paths yields the Monte Carlo method.
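
A minimal sketch combining the two ideas for the Ornstein-Uhlenbeck equation $dX_t = -\alpha X_t\,dt + \sigma\,dB_t$: discretize each sampled Brownian path with the step above and collect the terminal values to approximate the distribution of X(1). Parameter values are illustrative.

```python
import numpy as np

def simulate_ou_terminal(x0, alpha, sigma, T=1.0, n_steps=1000, n_paths=100_000, seed=0):
    """Euler scheme along sampled Brownian increments; returns samples of X(T)."""
    rng = np.random.default_rng(seed)
    h = T / n_steps
    x = np.full(n_paths, x0, dtype=float)
    for _ in range(n_steps):
        dB = rng.normal(0.0, np.sqrt(h), size=n_paths)   # Brownian increments over each step
        x = x + (-alpha * x) * h + sigma * dB             # Euler-Maruyama update
    return x

if __name__ == "__main__":
    x0, alpha, sigma, T = 1.0, 2.0, 0.5, 1.0             # illustrative parameters
    samples = simulate_ou_terminal(x0, alpha, sigma, T)
    # Exact OU moments, for comparison with the simulated distribution:
    exact_mean = x0 * np.exp(-alpha * T)
    exact_var = sigma**2 * (1.0 - np.exp(-2.0 * alpha * T)) / (2.0 * alpha)
    print("simulated mean/var:", samples.mean(), samples.var())
    print("exact mean/var    :", exact_mean, exact_var)
```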

The tree method is introduced as another numerical solution approach for SDEs, involving the use of simple random walks as approximations to draw samples from Brownian motions. By computing function values on a probability distribution, an approximate distribution of the Brownian motion can be realized. The lecturer highlights the importance of choosing an appropriate step size (h) to balance accuracy and computation time, as the approximation quality deteriorates with smaller step sizes.
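
The random-walk approximation behind the tree method, with step size $h$ and independent steps $\xi_i = \pm 1$, each with probability $1/2$, is

$$B_{kh} \;\approx\; \sqrt{h}\,\sum_{i=1}^{k} \xi_i,$$

which converges to Brownian motion as $h \to 0$ by the central limit theorem.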

During the lecture, the professor and students engage in discussions regarding the numerical methods for solving SDEs, particularly focusing on tree methods for path-dependent derivatives. The heat equation is also mentioned, which models the distribution of heat over time in an insulated, infinite bar. The heat equation has a closed-form solution and is well understood, providing valuable insights into solving SDEs. Its relationship to the normal distribution is explored, highlighting how heat distribution corresponds to a multitude of simultaneous Brownian motions.
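
The equation referred to here, written with the $\tfrac12$ normalization that matches standard Brownian motion, and its closed-form solution for an initial temperature profile $u_0$, are

$$\frac{\partial u}{\partial t} = \tfrac12\,\frac{\partial^2 u}{\partial x^2}, \qquad u(t,x) = \int_{-\infty}^{\infty} \frac{1}{\sqrt{2\pi t}}\,e^{-(x-y)^2/(2t)}\,u_0(y)\,dy,$$

where the Gaussian kernel is exactly the density of a Brownian motion after time $t$.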

The video concludes with the professor summarizing the topics covered and mentioning that the final project involves carrying out the details of solving SDEs. The speaker also indicates that the upcoming lectures will focus on practical applications of the material presented so far, further enriching the understanding of SDEs in real-world scenarios.

  • 00:00:00 In this section, the professor discusses the concept of finding a stochastic process that satisfies a given equation, and notes that these types of equations can be challenging to solve. However, as long as the functions involved are reasonable, there does exist a unique solution with given initial conditions. The professor also mentions technical conditions that must be met for the functions to be considered reasonable.

  • 00:05:00 In this section, the technical conditions for stochastic differential equations are explained. While the conditions may seem daunting, they usually hold, making it easier to find a solution for the differential equation. Professor Lee also provides an example of how to solve a simple stochastic differential equation in exponential form using a guessing approach and various formulas. The final step in solving stochastic differential equations is to check that all variables match, as shown in the expression given by an audience member.

  • 00:10:00 In this section, the speaker shows an example of solving a stochastic differential equation by analyzing its components and using them to backtrack to the function. He notes that this approach may not be better than guessing the answer, but it can be useful when an explicit solution is not known or when there is no reasonable guess. He then introduces the Ornstein-Uhlenbeck process, which is used to model mean-reverting stochastic processes, such as the behavior of gases. The process has a drift term that is proportional to the current value and a noise term that is independent of the value.

  • 00:15:00 In this section, the speaker discusses how to solve a stochastic differential equation by coming up with a guess for a test function and following an analysis similar to that used for ordinary or partial differential equations. For this process the initial guess is of the form $X_t = a(t)\,(X_0 + Y_t)$ with $a(0) = 1$, although they admit there is no real intuition or guideline for arriving at this guess. Differentiating with the product rule gives $dX_t = a'(t)\,\frac{X_t}{a(t)}\,dt + a(t)\,dY_t$; matching this against $dX_t = -\alpha X_t\,dt + \sigma\,dB_t$ makes the drift terms cancel only if $a(t) = e^{-\alpha t}$, and plugging this back in yields $dY_t = \sigma e^{\alpha t}\,dB_t$, so that $X_t = e^{-\alpha t}\big(X_0 + \int_0^t \sigma e^{\alpha s}\,dB_s\big)$.

  • 00:20:00 In this section, the focus is on the methods used for solving stochastic differential equations. The speaker indicates that the finite difference method, Monte Carlo simulation, or tree method are typically used when attempting to solve these equations. Although finite difference methods are usually used for solving ODE and PDE, they can be adapted to work with stochastic differential equations. The method is illustrated with an example where a given stochastic differential equation is chopped up into tiny pieces, and the solution is approximated using Taylor's formula.

  • 00:25:00 In this section, the speaker discusses the finite difference method for differential equations. The method involves taking a small step size h, say 1/100, and repeating the update 100 times until reaching the final value. The same method can be applied to two-variable functions by using a Taylor expansion to fill out the grid layer by layer. However, for stochastic differential equations the finite difference method becomes more complicated, because each value could have come from multiple possibilities. This can be resolved by taking a sample Brownian motion path and running the finite difference method along that fixed path.

  • 00:30:00 In this section, the speaker explains how to numerically solve a stochastic differential equation using Monte Carlo simulation. To do so, it is necessary to draw many sample Brownian paths from the underlying probability distribution. By doing this and computing the value of X(1) along each sampled path, starting from X(0), one obtains an approximate probability distribution for X(1). The speaker notes that the finite difference method cannot be applied directly to stochastic differential equations because of the underlying uncertainty from the Brownian motion, but it can be used once a Brownian path has been fixed.

  • 00:35:00 In this section, the professor explains the tree method for drawing a sample from Brownian motions using simple random walk as approximations. By computing the values of a function on a probability distribution, the tree method allows an approximate distribution of Brownian motion to be realized. It is important to note that the approximation for intermediate values becomes progressively worse as h becomes smaller, requiring the right h to balance accuracy and computation time.

  • 00:40:00 In this section, the professor and students discuss different methods for solving stochastic differential equations numerically, particularly focusing on tree methods for path-dependent derivatives. They also touch on the heat equation, which is a partial differential equation that models the distribution of heat over time in a perfectly insulated, infinite bar. The equation has a closed-form solution and is well understood.

  • 00:45:00 In this section, the concept of linearity is introduced, which states that if a family of functions all satisfy a specific equation, then the integration of these solutions also satisfies the same equation, as long as reasonable functions are used. This is useful because it allows for solving initial conditions, such as a Dirac delta function. By using this principle and superimposing a lot of solutions for a Dirac delta initial condition, a solution for arbitrary initial conditions can be obtained.

  • 00:50:00 In this section, the video discusses the heat equation and its relationship to the normal distribution. The heat equation models a perfectly insulated system in which heat is initially concentrated at one point and then becomes distributed over time according to the normal distribution. This can be thought of as a bunch of Brownian motions happening simultaneously. The solution to the heat equation is given by integration, allowing for an explicit solution at time t for all x. This closed-form solution can then be used to solve the Black-Scholes equation.

  • 00:55:00 In this section, the speaker concludes the lecture on stochastic differential equations by stating that the final project is to carry out all the details and explaining how the Black-Scholes equation will change to a heat equation. The speaker also mentions that the upcoming lectures will focus on applications of the material covered so far.
21. Stochastic Differential Equations
  • 2015.01.06
  • www.youtube.com
MIT 18.S096 Topics in Mathematics with Applications in Finance, Fall 2013. View the complete course: http://ocw.mit.edu/18-S096F13. Instructor: Choongbum Lee.
 

23. Quanto Credit Hedging



23. Quanto Credit Hedging

In this comprehensive lecture, Professor Stefan Andreev, a renowned expert from Morgan Stanley, dives into the fascinating world of pricing and hedging complex financial instruments in the realms of foreign exchange, interest rates, and credit. The primary focus of the discussion is on the concept of credit hedging, which involves mitigating the risks associated with credit exposure.

Professor Andreev begins by elucidating the process of replicating the payoff of a complex financial product using the known prices of other instruments and employing sophisticated mathematical techniques to derive the price of the complex product. He emphasizes the significance of incorporating jump processes, which are stochastic phenomena that capture sudden and significant price movements, to effectively describe the behavior of prices linked to sovereign defaults in emerging markets. One notable example explored is the impact of the Greek default situation on the Euro currency.

The lecture delves into various aspects of the theoretical pricing of bonds, considering mathematical models that facilitate hedging against defaults and foreign exchange (FX) forwards. The basic credit model introduced involves utilizing Poisson processes characterized by an intensity rate, denoted as 'h,' and a compensator term to achieve a constant no-arbitrage condition. This model provides a framework to analyze and price bonds while accounting for credit risks.
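
In the zero-interest-rate, zero-recovery version of this setup, the default time $\tau$ is the first jump of a Poisson process with intensity $h$, so the survival probability and the dollar-denominated risky zero-coupon bond price are simply

$$\mathbb{P}(\tau > T) = e^{-hT}, \qquad B_{\mathrm{USD}}(0,T) = \mathbb{E}\big[\mathbf{1}_{\{\tau > T\}}\big] = e^{-hT}.$$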

The video also delves into the Quanto Credit Hedging strategy, which entails employing a portfolio consisting of both dollar and euro bonds to hedge credit risk. The valuation of these bonds relies on factors such as the FX rate and the expected payoff. The strategy requires dynamic rebalancing as time progresses due to changes in the probability of default and jump sizes. Additionally, the lecture explores the extension of the model to incorporate non-zero recoveries, which enhances the pricing and hedging capabilities for credit contingent contracts and credit default swaps denominated in foreign currencies.

The speaker acknowledges the complexities that arise when utilizing Ito's lemma, a mathematical tool to handle stochastic differential equations, particularly in scenarios involving both diffusive and jump processes. Monte Carlo simulations are suggested as a means to verify the accuracy of the derived results. Real-life models are noted to be more intricate, often incorporating stochastic interest rates and hazard rates that can be correlated with other factors like FX. The lecture highlights the existence of a wide range of models designed for various markets, with complexity and required speed determining their suitability.
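
As an illustration of the kind of Monte Carlo check mentioned here, the toy model described in this summary (constant hazard rate h, zero interest rates, zero recovery, an FX rate that devalues by a factor $e^{J}$ at default and carries the compensating drift $h(1-e^{J})$ before default) can be simulated and compared with the closed-form values $e^{-hT}$ for the dollar bond and $S_0\,e^{-h e^{J} T}$ for the dollar value of the euro bond. This is a sketch under those assumptions, not the lecture's code, and the parameter values are illustrative.

```python
import numpy as np

def quanto_bond_mc(S0, h, J, T, n_paths=2_000_000, seed=0):
    """Toy quanto credit model: default time tau ~ Exp(h); before default the FX rate
    drifts at h*(1 - e^J); at default it jumps by a factor e^J.  Zero rates, zero
    recovery.  Returns Monte Carlo estimates of the dollar bond price and the dollar
    value of the euro bond (both zero-coupon, paying 1 unit at T if no default)."""
    rng = np.random.default_rng(seed)
    tau = rng.exponential(1.0 / h, size=n_paths)
    survived = tau > T
    # FX at T on surviving paths: deterministic compensating drift only (no diffusion in this toy model).
    S_T_surviving = S0 * np.exp(h * (1.0 - np.exp(J)) * T)
    usd_bond = survived.mean()                          # E[ 1_{tau > T} ]
    eur_bond_in_usd = (survived * S_T_surviving).mean() # E[ S_T * 1_{tau > T} ]
    return usd_bond, eur_bond_in_usd

if __name__ == "__main__":
    S0, h, J, T = 1.30, 0.03, np.log(0.7), 5.0          # illustrative: 30% FX devaluation on default
    mc_usd, mc_eur = quanto_bond_mc(S0, h, J, T)
    print("USD bond,      MC vs closed form:", round(mc_usd, 5), round(np.exp(-h * T), 5))
    print("EUR bond in $, MC vs closed form:", round(mc_eur, 5), round(S0 * np.exp(-h * np.exp(J) * T), 5))
```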

Estimating hazard rates (h) and jump sizes (J) is discussed, with the speaker explaining how bond prices can be used to estimate these parameters. Recovery estimates from default are explored, with conventions typically setting fixed rates at 25% for sovereign nations and 40% for corporates. However, recovery rates can vary significantly depending on the specific circumstances. Investors usually make assumptions about recovery rates, and estimation can be influenced by macroeconomic factors. The lecture concludes by addressing the estimation of hazard curves using benchmark bond prices and replicating processes to estimate prices in scenarios involving multiple currencies.

Throughout the lecture, Professor Andreev provides numerous examples, equations, and insights to deepen the audience's understanding of pricing and hedging complex financial products. The topics covered range from statistical analysis and predictions to the intricacies of various mathematical models, ultimately providing valuable knowledge for individuals interested in this domain.

Professor Stefan Andreev introduces the concept of pricing bonds using mathematical models and the importance of hedging against defaults and foreign exchange fluctuations. He demonstrates the process through examples and emphasizes the need for accurate estimation of hazard rates and recovery rates.

The lecture explores the Quanto Credit Hedging strategy, which involves constructing a portfolio of dollar and euro bonds to hedge against credit risk. The value of the bonds is determined by considering the FX rate and the expected payoff. The model takes into account the probability of default and the jump size, requiring dynamic portfolio rebalancing as time progresses.

The video delves into deriving the prices of dollar and euro bonds for the Quanto Credit Hedging strategy. The speaker explains the calculations involved in determining the probability of tau being greater than T or less than T and the expected value of S_T. By analyzing the ratios of the notionals of the two bonds, a hedged portfolio strategy is proposed.
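
Under the simplified assumptions above (zero rates, zero recovery, a single jump of size $e^{J}$ in the FX rate at default), the two bond prices have closed forms,

$$B_{\mathrm{USD}}(0,T) = \mathbb{P}(\tau > T) = e^{-hT}, \qquad \mathbb{E}\big[S_T\,\mathbf{1}_{\{\tau > T\}}\big] = S_0\,e^{-h\,e^{J}\,T},$$

where the second expression is the dollar value of the euro-denominated bond, so that in dollar terms the euro bond behaves like a bond with effective hazard rate $h\,e^{J}$; the ratio of notionals in the hedge follows from comparing the two expressions and must be rebalanced as the default probability and jump size change over time.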

The speaker further extends the Quanto credit hedging model to incorporate non-zero recoveries. This extension allows traders to price credit contingent contracts and credit default swaps denominated in foreign currency, providing more accurate hedge ratios. Although calibration becomes more challenging with the extended model, Professor Andreev highlights its significance in understanding complex mathematical models.

The video also discusses the complications that arise when using Ito's lemma to account for both diffusive and jump processes. The speaker suggests employing Monte Carlo simulations to validate the accuracy of the results obtained from the calculations. Real-life models are acknowledged as more intricate, often incorporating stochastic interest rates and hazard rates correlated with other factors such as foreign exchange.

Furthermore, the lecture emphasizes that recovery estimates from default vary and are typically set at conventions such as 25% for sovereign nations and 40% for corporates. However, these values are not fixed and may differ depending on the specific corporation. Estimating recovery rates involves considering macroeconomic factors, although it remains a subjective concept where investors usually rely on assumptions.

To estimate hazard rates (h) and J, Professor Andreev explains the use of bond prices. By taking benchmark bonds with known prices, hazard curves can be constructed. Replicating these benchmark bonds helps estimate the h value for each bond price. When multiple currencies are involved, the process becomes more complex, requiring replication of multiple processes to estimate prices. In the case of bonds paying coupons, all coupon payments must be considered and their expectation calculated.
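
For a single zero-coupon benchmark bond under the toy assumptions used above (zero interest rates), the estimate is explicit, and adding a recovery assumption R modifies it in the obvious way:

$$P = e^{-hT} \;\Rightarrow\; h = -\frac{\ln P}{T}, \qquad\text{with recovery } R:\;\; P = e^{-hT} + R\,\big(1 - e^{-hT}\big).$$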

Overall, Professor Stefan Andreev's lecture provides valuable insights into the pricing and hedging of complex products in foreign exchange, interest rates, and credit. Through detailed explanations, examples, and mathematical models, he sheds light on the intricacies of credit hedging, bond pricing, and the estimation of hazard rates and recoveries.

  • 00:00:00 In this section of the lecture, Professor Stefan Andreev from Morgan Stanley explains that there are two key areas in finance for quantitative skills: statistics and predictions, and pricing and hedging of complex instruments. Professor Andreev focuses on pricing and hedging of complex products in the foreign exchange, interest rates, and credit areas. He describes the process of replicating the payoff of a complex product using other products whose prices are known and using mathematical techniques to derive the price of the complex product. He also highlights the importance of using jump processes to describe certain price behaviors related to sovereign defaults in emerging markets, including the Euro currency during the Greek default situation.

  • 00:05:00 In this section, we learn about foreign exchange and how it is mathematically described as the price of a unit of foreign currency in dollars. The spot FX rate is denoted by S and is a current rate of exchange. FX forwards are contracts that allow for the locking in of an effective dollar interest rate. FX forwards are connected to foreign interest rates, which can be inferred by knowing the FX forwards. The concept of arbitrage is also discussed, explaining how it can be used to make a profit when interest rates in one currency are different from those in another. Additionally, the definition of risk-free rates and their use in the FX process is presented.

  • 00:10:00 In this section, the speaker discusses the process for the FX currency and the constraints on its stochastic differential equation to have a no-arbitrage condition, which is essentially that the drifts of the process have to be the difference in interest rates. The arbitrage conditions from before apply, which means that the forward rate has to be the spot rate times the interest rate differential. The speaker also introduces the Black-Scholes FX model, which is the standard basic dynamic FX model used in industry, and discusses the interesting properties of FX and the fact that its exchange rate cannot be negative. However, it can get very big and has no upper bound, making the distribution skewed.

  • 00:15:00 In this section, the speaker introduces a game where assumptions are made to simplify the system and participants are asked to choose between two payoffs, A and B. Both payoffs are symmetric with regards to wagered amounts and participants either gain or lose the same amount, but one is preferred over the other. The speaker finds out that nobody wants to play the game, but providing scenarios where the exchange rates are either 1.25 or 0.75, he illustrates that Bet A is $25 better than Bet B. The speaker concludes that Bet A is the better deal since the value of the bet's units depends on whether you win or lose.

  • 00:20:00 In this section, the presenter explains the concept of credit FX quanto models, using Italy's bonds issued in both dollars and euros as an example. Italy issues both euro and dollar bonds because it needs to reach as many investors as possible. However, both types of bonds cross-default; meaning if Italy defaults on one bond, all of its bonds default together, including the euros and the dollar bonds. The credit spread, which is the measure of how risky Italy is, is not the same in both currencies, and it determines which currency Italy prefers to issue bonds in and which currency investors prefer to buy bonds in. The presenter asks the audience which currency they think has a higher credit spread and explains that they need to come up with a strategy to replicate one bond with the other to compare the two.

  • 00:25:00 In this section, the speaker discusses how to analyze payoffs of instruments and write a model for FX and credit to price bonds. The example given is two zero-coupon bonds, one in dollars and one in euros, with the same maturity that pay 100 on maturity. They use an arbitrage strategy to sell 100 times Ft dollar bonds and buy 100 euro bonds, entering into an FX forward contract for 100,000 euros for maturity T at zero cost. The FX forward hedges the proceeds, and they can exchange the proceeds of the bonds to obtain the euro bonds. By computing a model that explains the difference, they find that USD bond spreads are actually lower in the marketplace, and bonds are either performing or non-performing and in default.

  • 00:30:00 In this section, the concept of hedging using FX forwards and bonds is explored. The scenario of two bonds, one issued in dollars and another one issued in euros with the same nominal value is discussed. Theoretically, if the exchange rate is set properly, the two bonds should have the same value at maturity, and the investor can't make a profit or loss. However, when there is a default, the situation changes, and the bonds may not have equal values, and it's hard to hedge using FX forwards and bonds only. The case of the 2001 default of Argentina is presented to show how it looks when the FX forward is left naked. Mathematical models are introduced as a solution to help hedge using the replication strategy, and further explanations are given regarding pricing without hedging and vice versa.

  • 00:35:00 In this section, the speaker introduces the basic credit model for default, in which the default event is driven by a Poisson process with intensity (hazard rate) h. Assuming a constant hazard rate and a zero-interest-rate environment, the speaker writes down the FX dynamics in the model, which include a jump term J dN, where J is the percentage devaluation of the FX rate on default and dN is the increment of the Poisson process. The goal is a no-arbitrage condition in which the expected value of the FX rate equals its initial value, which is achieved by setting the drift mu equal to h times (1 minus e to the J), the compensator term.
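
A small numerical check of the no-arbitrage condition described here, sketched under the same assumptions (constant hazard, zero rates, single jump on default); the parameter values are illustrative, not taken from the lecture. With drift mu = h(1 - e^J), the simulated mean of S_T should stay close to S_0.

```python
import numpy as np

# Jump-to-default FX model: d(log S) = mu * 1{t < tau} dt + J dN_t,
# where tau ~ Exp(h) is the default time and N jumps from 0 to 1 at tau.
h = 0.05          # hazard rate (illustrative)
J = -0.4          # log-devaluation of the FX rate on default (illustrative)
T = 5.0           # horizon in years
S0 = 1.30         # initial FX rate (illustrative)
mu = h * (1.0 - np.exp(J))   # compensator drift so that E[S_T] = S_0

rng = np.random.default_rng(0)
tau = rng.exponential(1.0 / h, size=1_000_000)   # simulated default times

# log S_T = log S_0 + mu * min(tau, T) + J * 1{tau <= T}
logS_T = np.log(S0) + mu * np.minimum(tau, T) + J * (tau <= T)
print("E[S_T] estimate:", np.exp(logS_T).mean(), " vs S_0 =", S0)
```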

  • 00:40:00 In this section, the speaker derives the form of the compensator term of the Poisson process and checks that it satisfies the condition on the expectation. The formula for d log S_t is written down and integrated with the help of an indicator function and the term J dN_t. The speaker then splits the cases tau less than and greater than capital T and notes that, since J is a constant, the integral of J dN_t is simply J times N_t. All the derivations are posted in the notes for reference.

  • 00:45:00 In this section, the speaker computes the expectation of S_T by integrating over the probability distribution of tau. Erasing the top line of the previous equation, he shows that log of S_T over S_0 equals h tau (1 minus e^J) plus the jump J when tau is less than T, and h T (1 minus e^J) when tau is greater than or equal to T. He then exponentiates both sides and writes the expectation of S_T as the integral from 0 to infinity of S(tau) times the density phi(0, tau) d tau, splitting the integral into the part from 0 to capital T and the part from capital T to infinity.
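
Written out under the stated assumptions (constant hazard h, zero rates, a single devaluation jump J at the default time tau with density phi(tau) = h e^{-h tau}), the calculation sketched here is:

```latex
\log\frac{S_T}{S_0} =
\begin{cases}
h(1-e^{J})\,\tau + J, & \tau < T,\\[2pt]
h(1-e^{J})\,T, & \tau \ge T,
\end{cases}
\qquad
\mathbb{E}[S_T]
= \int_0^{T} S_0\,e^{h(1-e^{J})\tau + J}\,h e^{-h\tau}\,d\tau
+ S_0\,e^{h(1-e^{J})T}\,e^{-hT}
= S_0 .
```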

  • 00:50:00 In this section, the speaker explains how to work with jump processes and take expectations, and demonstrates that his guessed drift makes the expected change zero. The dynamics of log S with a jump on default are defined and the probability density is calculated. The speaker uses Ito's lemma for jump processes to pass from the dynamics of log S to the dynamics of S, with the final result dS/S equal to h (1 minus e^J) times the indicator that default has not yet occurred, dt, plus (e^J minus 1) dN_t.

  • 00:55:00 In this section, the speaker works through the pricing exercise for the two zero-coupon bonds in different currencies using the FX and credit models. Pricing follows standard theory: the price today is the expectation of the discounted payoff at maturity T. The speaker calculates the probability that tau is greater than T and uses the cumulative distribution function of tau to determine the dollar bond price. By taking the ratio of the two bond notionals, the speaker then suggests a hedge portfolio for the two bonds.

  • 01:00:00 In this section, the speaker explains how to hedge the credit risk by constructing a portfolio of a dollar bond and a euro bond with the same payoff, except that the euro bond pays in euros instead of dollars. The speaker demonstrates how to compute the expectation of the euro bond payoff in dollars using the indicator function, then constructs a zero-cost portfolio at time t=0 by selling one dollar bond and buying a certain number of euro bonds. Finally, he checks whether the portfolio has the same value in the default and no-default cases, which is what makes it a hedged portfolio.

  • 01:05:00 In this section, the speaker discusses the hedging strategy for the credit risk using the dollar and euro bond example. The value of the dollar bond is computed from a formula involving the FX rate, while the value of the euro bond position is computed from the number of bonds held and the FX rate. The hedging strategy is dynamic: it depends on the probability of default and the jump size, and the portfolio must be rebalanced continuously as time passes and the probability of default changes. The speaker also touches on the added complexity of bond pricing when recovery is greater than zero.

  • 01:10:00 In this section, the speaker derives the dollar bond price and the euro bond price, taking into account the FX rate that jumps on default. The dollar bond price follows from the probabilities of tau being greater or less than T, while the euro bond price is obtained by dividing its time-0 dollar value by S_0, which requires the expected value of S_T on the relevant survival event. The zero-coupon bond price calculation is broken down into several parts, which the speaker works through carefully.
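
A hedged sketch of the zero-recovery bond prices this derivation leads to, under the same assumptions (zero rates, constant hazard h, FX devaluing by a factor e^J on default). The closed forms below follow from the expectation computed above and are written here purely for illustration, with made-up parameter values.

```python
import numpy as np

def usd_bond_price(h, T, notional=100.0):
    """Zero-recovery dollar bond: pays `notional` dollars if no default before T."""
    return notional * np.exp(-h * T)

def eur_bond_price_in_eur(h, J, T, notional=100.0):
    """Zero-recovery euro bond, price expressed in euros.
    Its dollar value is E[S_T * 1{tau > T}] * notional = S0 * exp(-h * e^J * T) * notional,
    so dividing by S0 gives the euro price below (the quanto effect)."""
    return notional * np.exp(-h * np.exp(J) * T)

h, J, T, S0 = 0.05, -0.4, 5.0, 1.30          # illustrative parameters
p_usd = usd_bond_price(h, T)
p_eur = eur_bond_price_in_eur(h, J, T)
print("USD bond price:", round(p_usd, 4), " EUR bond price (in EUR):", round(p_eur, 4))

# Zero-cost portfolio at t=0: sell one dollar bond, buy n euro bonds.
n = p_usd / (S0 * p_eur)
print("euro bonds bought per dollar bond sold:", round(n, 4))
```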

  • 01:15:00 In this section, the video covers the expectation needed for the quanto credit hedging calculation with recovery. To compute it, the speaker again integrates over the probability density of tau on the interval from 0 to infinity. The calculation looks similar to the previous one, but now there are two terms because the case tau less than T contributes: the first term is a survival factor of the form e^{-hT}, and the second term is R times an expectation over tau (the recovery contribution), which the speaker calculates in detail.

  • 01:20:00 In this section, the speaker explains how to extend the quanto credit hedging model to non-zero recoveries. He suggests that the model could be pushed even further by adding another term, and mentions that his team at Morgan Stanley is already working on such a model. The extended model allows traders to price credit-contingent contracts and credit default swaps denominated in foreign currency and provides better hedge ratios, although it makes calibration more difficult. He considers the extension a worthwhile exercise for students seeking to understand complex mathematical models.

  • 01:25:00 In this section, the speaker discusses the complications that arise when Ito's lemma must account for both diffusive and jump processes, and suggests using a Monte Carlo simulation to verify the analytical results. He also explains that real-life models are more complex, often incorporating stochastic interest rates and hazard rates that can be correlated with other factors such as FX, and that a range of models is implemented for different markets depending on the complexity and speed required. Finally, the speaker answers a question about which of the initial Italian bets was better, noting that the question can only be answered within the model; in practice, factors such as supply and demand and the relative liquidity of euro and dollar bonds also matter.

  • 01:30:00 In this section, the speaker discusses credit hedging when investing in euros versus dollars and the effect of default on currency values. The expected value of a currency is determined by interest rate differentials, and investors prefer to buy bonds in the currency that would appreciate if default does not occur, since they only get paid in that scenario. Recovery estimates vary and are typically fixed by convention at 25% for sovereigns and 40% for corporates, but these numbers are only conventions and actual recovery differs by issuer. Recovery can be estimated using macroeconomic factors, but it is a fuzzy concept, and investors usually simply make assumptions about it.

  • 01:35:00 In this section, Stefan Andreev explains how to estimate the hazard rate h and the jump size J from bond prices. If the recovery rate is fixed, bond prices can be converted into hazard rates, so by taking benchmark bonds with known prices one can build hazard curves. Derivatives can then be priced by replicating them with these benchmark bonds, using the h value implied by each bond price. When multiple currencies are involved it becomes trickier, because several processes have to be replicated to back out the prices. To include coupon-paying bonds, one writes down all the coupon payments and takes their expectation.
23. Quanto Credit Hedging — MIT 18.S096 Topics in Mathematics with Applications in Finance, Fall 2013. Instructor: Stefan Andreev. Complete course: http://ocw.mit.edu/18-S096F13

24. HJM Model for Interest Rates and Credit




In this section, Denis Gorokhov, a financial expert at Morgan Stanley, discusses the HJM model (Heath-Jarrow-Morton) and its application in pricing and hedging exotic financial products, including credit derivatives and double range accruals. The HJM model is a powerful framework used by major banks like Morgan Stanley and Goldman Sachs to trade various types of exotic derivatives efficiently and meet client demands.

Gorokhov compares the HJM model to theoretical physics, highlighting that it offers both solvable models and complex problems. It enables banks to accurately price a wide range of exotic derivatives numerically. He emphasizes the volatility and randomness of markets and how they can impact derivative traders who require effective hedging strategies.

The lecture introduces the concept of starting a derivative pricing model from a stochastic process and uses log-normal dynamics as a fundamental model for stock price movements. The model incorporates a deterministic component called drift and a random component called diffusion, which captures the impact of randomness on stock prices. Using this model, the Black-Scholes formula can be derived, allowing for the calculation of the probability distribution for the stock at a given time and enabling the pricing of derivatives with a payoff dependent on the stock price.

The HJM model is then discussed specifically in the context of interest rates and credit. The lecturer first recalls the log-normal dynamics used for stocks, which ensure that stock prices cannot be negative, before turning to the dynamics of interest rates. Ito's lemma, a cornerstone of derivative pricing theory, is introduced and its derivation explained. Ito's lemma shows how to differentiate a function of a stochastic variable, which makes the modeling and pricing of derivatives tractable.

The Green's function of the equation used in the HJM model is highlighted as being similar to the probability distribution function for stock prices. In the risk-neutral space, where the drift of all assets is the interest rate, dynamic hedging becomes crucial, with only the volatility parameter affecting option pricing. Monte Carlo simulations are employed to simulate stock prices and other financial variables, enabling the calculation of derivative prices. This simulation method is a powerful tool that applies to various fields within finance.

The lecture also delves into the concept of discount factors and their significance in finance. Forward rates, which serve as a convenient parametrization for non-increasing discount factors, are explained. The yield curve, representing the relationship between different maturities and the associated interest rates, is discussed. Typically, the yield curve is upward sloping, indicating higher interest rates for longer-term borrowing.

The swap market is introduced as a source of quoted fixed rates for different maturities. Equating the present value of the fixed leg, the sum of the discounted fixed payments, with the value of the floating leg determines the swap rate. This rate tells us what it is worth paying today to cover a stream of future fixed payments, i.e., the present value of the fixed-rate leg.

In conclusion, the lecture emphasizes the importance of risk-neutral pricing in evaluating the value of exotic derivatives and securities issued by large banks. It highlights the role of the HJM model, Monte Carlo simulations, and the understanding of interest rates, credit, and discount factors in pricing and hedging these complex financial instruments.

  • 00:00:00 In this section, Denis Gorokhov, who works at Morgan Stanley, discusses the HJM model, developed by Heath, Jarrow, and Morton at the beginning of the 1990s. The HJM model is a general framework for pricing derivatives that can be applied to interest rates and credit. It allows big banks like Morgan Stanley and Goldman Sachs to trade thousands of different types of exotic derivatives quickly and respond to client demand. Gorokhov compares the HJM framework to theoretical physics, where there are elegant solvable models alongside genuinely complex problems; similarly, the framework lets banks price all kinds of exotic derivatives accurately by numerical methods.

  • 00:05:00 In this section, the professor and Denis Gorokhov discuss the volatility and randomness of the markets and how it can affect derivative traders who need to be hedged. They introduce the concept of starting a derivative pricing model from a stochastic process and use the log-normal dynamics as a basic model for stock price movements. The model includes a drift, which is a deterministic part of the stock price dynamics, and diffusion, which is the impact of randomness on the stock price. Using this model, one can derive the Black-Scholes formula, which calculates the probability distribution for the stock at a given time and enables the pricing of derivatives with a payoff dependent on the stock price.

  • 00:10:00 In this section of the video, the lecturer discusses the HJM model for interest rates and credit. They introduce the notion of a stochastic process with a drift and a volatility term and show that the equation can be solved straightforwardly by integration. The lecturer explains why the dynamics are assumed to be log-normal, so the stock price cannot go negative, and how the resulting distribution can be expressed in terms of a standard normal variable. They introduce Ito's lemma, explain how it is obtained, and show how it is used to differentiate a function of a stochastic variable. Finally, they show that the formula for the model is very similar to the formula for the previous equation, the only difference being the value of alpha.
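
For reference, the log-normal dynamics and the solution obtained by applying Ito's lemma to log S, in standard notation (not quoted verbatim from the slides):

```latex
dS_t = \mu S_t\,dt + \sigma S_t\,dW_t
\;\Longrightarrow\;
d\log S_t = \big(\mu - \tfrac12\sigma^2\big)dt + \sigma\,dW_t
\;\Longrightarrow\;
S_T = S_0\exp\!\big[\big(\mu - \tfrac12\sigma^2\big)T + \sigma W_T\big].
```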

  • 00:15:00 In this section, the speaker explains the importance of this setup for understanding stock dynamics and the Black-Scholes formalism. He emphasizes the fundamental financial restriction that a stock cannot be a liability and therefore cannot go negative. Through the Black-Scholes formalism and the Monte Carlo method, the speaker shows how to compute the change in the hedged portfolio, require that it earn the risk-free return, and arrive at the Black-Scholes differential equation. The equation is fundamental and elegant: the drift mu drops out entirely and only the interest rate remains. The speaker attributes this crucial fact to hedging, holding a position in the option against an offsetting position in the underlying stock.

  • 00:20:00 In this section, the speaker discusses Ito's lemma, a concept from stochastic calculus that plays a crucial role in the HJM framework for interest rates and credit. He first notes that the hedging argument eliminates the drift and the risk from the pricing equation, which is what makes option pricing tractable, but stresses that understanding the derivation of Ito's lemma is important for understanding the model's underlying assumptions. He then offers a simple derivation of Ito's lemma, breaking time into small intervals and examining the log-normal dynamics and the randomness in stock price fluctuations. The cornerstone of Ito's lemma is the second-derivative term that appears in the equation for the option price.

  • 00:25:00 In this section, the speaker discusses the HJM model for interest rates and credit and explains how to simplify the equations involved. By neglecting random terms that are much smaller than linear ones and summing up all the equations, the speaker arrives at a term that appears stochastic but becomes deterministic in the large N limit. This is shown by demonstrating how a sum of random variables becomes more narrow and behaves in a deterministic way as N tends to infinity. The speaker recommends this exercise to understand the concept better.

  • 00:30:00 In this section, the speaker continues the derivation and shows how it hinges on properties of the standard normal distribution. By computing the fourth moment of a normal variable, one can show that the relevant sum of squared increments becomes deterministic in the large-N limit, which is what makes option pricing possible. This is the content of Ito's lemma, which is stated without proof in many derivatives books but is a cornerstone of derivative pricing theory. The equation obtained through Ito's lemma is similar to the heat equation and can be solved using standard methods.

  • 00:35:00 In this section, the professor discusses how the HJM framework is used in Monte Carlo simulations to price derivatives. The Green's function of the pricing equation is very similar to the probability distribution function for the stock price, the difference being that the real-world drift of the stock disappears altogether and is replaced by the interest rate. In the risk-neutral space, where the drift of every asset is the interest rate rather than its actual drift, dynamic hedging plays the crucial role and only the volatility parameter matters for option pricing. Monte Carlo simulation is then used to simulate the stock and other financial variables and compute the derivative's price, a powerful framework that applies across many areas of finance.

  • 00:40:00 In this section, Monte Carlo simulation is explained as a fundamental method for pricing derivatives, in particular exotic derivatives whose prices are not easily obtained by analytical methods. The video then explains the basics of interest rate derivatives and how they allow individuals and financial institutions to manage their interest rate risk. The present value of money and the discount factor are key concepts here, and forward rates provide a convenient parametrization of the discount factor, which is a non-increasing function of maturity.
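
A minimal sketch of the parametrization being described: instantaneous forward rates f(0, t) determine discount factors via D(0, T) = exp(-∫ f(0, u) du). The forward curve below is invented for illustration.

```python
import numpy as np

# Illustrative piecewise-constant instantaneous forward curve f(0, t).
grid = np.array([0.0, 1.0, 2.0, 5.0, 10.0])    # bucket boundaries in years
fwd = np.array([0.01, 0.015, 0.02, 0.025])     # forward rate in each bucket

def discount_factor(T):
    """D(0, T) = exp(-integral_0^T f(0, u) du) for the piecewise-constant curve."""
    time_in_bucket = np.clip(np.minimum(grid[1:], T) - grid[:-1], 0.0, None)
    return float(np.exp(-np.sum(fwd * time_in_bucket)))

for T in (1.0, 2.0, 5.0, 10.0):
    print(f"D(0, {T:4.1f}) = {discount_factor(T):.6f}")
# Because the forward rates are non-negative here, D(0, T) is non-increasing in T.
```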

  • 00:45:00 In this section, the modeling of forward rates for interest rate derivatives is discussed, along with how the dynamics of the yield curve differ from those of the stock market. The yield curve is a one-dimensional object showing the rate earned at each maturity; a typical curve is upward sloping, meaning longer-term borrowing pays higher interest rates. The yield of the 10-year US Treasury note is used as an example: the US government borrows money to finance its activities, pays the holder a coupon over the life of the note, and returns the principal at the end of the period. The gradual decline of interest rates in recent years is discussed in connection with weak demand for borrowing.

  • 00:50:00 In this section, the speaker discusses the government's attempt to make interest rates as low as possible in order to alleviate the burden for corporations and private individuals during a recession. However, investing in non-productive assets, such as real estate, is not necessarily a guaranteed solution. Additionally, the speaker explains the role of LIBOR, a short-term rate at which financial institutions in London borrow money from each other on an unsecured basis, in derivatives pricing. Various derivatives, such as swaptions and cancel-able swaps, depend on the discount factors that are determined by forward rates; these serve as key parameters in Monte Carlo simulations for modeling interest rate derivatives.

  • 00:55:00 In this section, the speaker explains the swap market and how it can be used to obtain the discount factor, which tells us how much a dollar in the future is worth today. The swap market quotes, for different maturities, the fixed rate exchanged against floating; equating the sum of the discounted fixed payments with the value of the floating leg pins down the swap rate. This rate tells us what it is worth investing today to cover the future fixed payments, i.e., their present value. It is also explained that a floating-rate security is worth par: the present value of its payments equals the notional.
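
A small sketch of the relationship described here, assuming annual fixed payments and a floating leg worth par; the discount factors are illustrative numbers, not market data.

```python
# Par swap rate: value of fixed leg = value of floating leg.
# With annual payments, S * sum_i D(0, T_i) = 1 - D(0, T_N),
# because a floating-rate note is worth par (its PV equals the notional).
D = {1: 0.990, 2: 0.978, 3: 0.963, 4: 0.946, 5: 0.927}   # illustrative discount factors

annuity = sum(D.values())                 # PV of receiving 1 unit each year for 5 years
swap_rate = (1.0 - D[5]) / annuity        # 5-year par swap rate
print(f"5y annuity = {annuity:.4f}, 5y par swap rate = {swap_rate:.4%}")
```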

  • 01:00:00 In this section, the speaker explains the concept of OIS discounting and the function of the discount rate, which is used to price all kinds of swaps. Interest rate derivatives are based on the dynamics of the yield curve and the evolution of the discount function. The speaker also discusses the HJM framework for modeling and pricing derivatives, as well as other models such as the Ho-Lee, Hull-White, and CIR models. The speaker demonstrates the implementation of Ito's Lemma to derive the equation for drift and volatility of forward rates in the Monte Carlo simulation.

  • 01:05:00 In this section, the HJM model for interest rates and credit is discussed. In the risk-neutral world the forward-rate drift is not a free parameter: it is fixed by an equation that depends on the volatility sigma (the HJM drift condition, sketched below). Once this is in place, the model for interest rate derivatives is straightforward and closely parallels the stock case. Credit derivatives are then discussed within the same framework: for corporate bonds there is a probability of not getting the money back, this risk is compensated by the coupons the bonds pay, and the credit default swap is the fundamental instrument of the credit derivatives market.
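
The relation referred to is the standard one-factor HJM no-arbitrage drift condition, in which the risk-neutral drift of each forward rate is fully determined by the volatility function (textbook notation, not quoted from the slides); once sigma(t, T) is specified, the whole forward curve can be simulated by Monte Carlo.

```latex
df(t,T) = \Big(\sigma(t,T)\int_t^{T}\sigma(t,u)\,du\Big)dt + \sigma(t,T)\,dW_t .
```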

  • 01:10:00 In this section, the speaker explains credit default swaps, which are used to protect against default: if a bondholder suffers a default, the seller of protection compensates them for the loss. He also discusses how market-implied survival probability is a fundamental concept in the world of credit derivatives, and explains that the HJM model for credit describes the dynamics of hazard rates, which parametrize survival probabilities. Lastly, the speaker describes an important type of instrument, the corporate callable bond, which lets a corporation borrow, say, $100 and pay 5% every year while retaining the option to return the $100 early and close the deal.
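
In the usual notation, hazard rates h(t) parametrize survival probabilities, and under a flat hazard with simple assumptions the CDS par spread is roughly the hazard rate times the loss given default (standard relations shown for orientation, not quoted from the lecture):

```latex
Q(\tau > T) = \exp\!\Big(-\int_0^{T} h(t)\,dt\Big),
\qquad
s_{\mathrm{CDS}} \approx (1 - R)\,h .
```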

  • 01:15:00 In this section, the speaker discusses callable debt and its advantages for corporations in managing their debt. Callable debt lets the issuer exercise an option to refinance at a lower rate if interest rates fall over time, offering significant cost savings, much like the recent wave of mortgage refinancing by private individuals. Pricing callable debt requires consideration of interest rate risk and issuer quality, as well as an understanding of hazard rates, which capture how risky the issuer is. Overall, the speaker highlights the usefulness of risk-neutral pricing in evaluating the value of exotic derivatives and securities issued by large banks.

  • 01:20:00 In this section, the speaker explains the use of the HJM model and Monte Carlo simulation for complicated payoffs such as structured notes. Corporations need to raise money and pay interest, and investors look for returns higher than those offered by a riskless option such as US Treasuries. Corporate bonds offer higher coupons but still have low returns after taxes and inflation. In this context, banks issue structured notes, which pay higher coupons if certain market conditions are satisfied. Investors who believe in their market view are attracted to this kind of risk: they can earn a high return, but they can also lose everything because they are bearing very high credit risk.

  • 01:25:00 In this section, the speaker explains structured notes, in which a derivative is sold alongside the note to enhance the coupon rather than setting a plain coupon, resulting in a higher headline return. Investors are looking for yield enhancement and are willing to take educated risks if they understand the economic meaning of each condition. The speaker notes that pricing such bespoke instruments requires simulating the relevant market variables, for example the 30-year and 10-year yields. These products are nonstandard, but banks can earn an extra margin on them while also saving money, since they are cheaper to issue than plain-vanilla bonds.

  • 01:30:00 In this section, Denis Gorokhov discusses the use of Monte Carlo simulation in pricing and hedging exotic financial products, such as credit derivatives. He explains that the Heath-Jarrow-Morton (HJM) model is often used to simulate interest rates. Gorokhov also discusses implying volatility from the market or from historical estimates in order to price these complex products: liquid derivatives are used to imply sigma, which then enables the pricing of illiquid exotic derivatives. He also touches on using historical precedent to deduce implied frequencies of certain market outcomes, such as the probability of the S&P 500 dropping below a given level.

  • 01:35:00 In this section, Denis Gorokhov discusses using Monte Carlo simulation to price exotic derivatives, such as double range accruals. While some derivatives can be priced with analytical approximations, traders often still use Monte Carlo simulation to evaluate risk and price complex products accurately. Gorokhov gives the example of writing a simple MATLAB program to verify the Black-Scholes formula, but notes that for more complicated models, such as HJM for the term structure, calibration is necessary and is derived from the implied volatilities of liquid options.
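
The verification exercise mentioned here, sketched in Python rather than MATLAB; the inputs are illustrative, and the Monte Carlo estimate should match the closed-form price to within sampling error.

```python
import numpy as np
from math import log, sqrt, exp
from scipy.stats import norm

S0, K, r, sigma, T = 100.0, 105.0, 0.02, 0.25, 1.0   # illustrative inputs

# Closed-form Black-Scholes call price.
d1 = (log(S0 / K) + (r + 0.5 * sigma**2) * T) / (sigma * sqrt(T))
d2 = d1 - sigma * sqrt(T)
bs_call = S0 * norm.cdf(d1) - K * exp(-r * T) * norm.cdf(d2)

# Risk-neutral Monte Carlo: simulate S_T and discount the average payoff.
rng = np.random.default_rng(42)
Z = rng.standard_normal(1_000_000)
S_T = S0 * np.exp((r - 0.5 * sigma**2) * T + sigma * sqrt(T) * Z)
mc_call = exp(-r * T) * np.maximum(S_T - K, 0.0).mean()

print(f"Black-Scholes: {bs_call:.4f}   Monte Carlo: {mc_call:.4f}")
```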

  • 01:40:00 In this section, Denis Gorokhov explains that Monte Carlo analysis can be difficult for complicated models, but is necessary for more exotic derivatives that require risk-neutral pricing. While historical analysis can be used to test how a model's Greeks or sensitivity with respect to underlying stock performed historically, it has nothing to do with prediction, as risk-neutral pricing does not involve making predictions. The idea of dynamic hedging is to manage big portfolios of derivatives without taking any risks, charging a little extra to make a living. Banks may bear some residual risk due to the complexity of derivatives, but assumptions can be made to re-balance positions dynamically and move forward without losing money. Monte Carlo can be set up using implied parameters from current prices of various derivatives in the market, which gives a good baseline price. Other Monte Carlos can be done to provide a robust estimate of pricing and hedging costs, including stress scenarios.

  • 01:45:00 In this section, Denis Gorokhov explains the importance of stress testing for banks. He highlights that dynamic hedging and derivatives are not about just knowing the current price, but also about being able to predict market behavior in different scenarios such as interest rate changes or volatility spikes. Stress tests are conducted by large departments at banks to look at all kinds of risks and cash flows for the entire bank and not just one particular desk. These tests have become heavily regulated by the government, making it a non-trivial problem for big banks to manage.
24. HJM Model for Interest Rates and Credit — MIT 18.S096 Topics in Mathematics with Applications in Finance, Fall 2013. Instructor: Denis Gorokhov. Complete course: http://ocw.mit.edu/18-S096F13

25. Ross Recovery Theorem




In this video, Peter Carr dives into the Ross Recovery Theorem and its application in extracting market beliefs from market prices. The talk introduces three probability measures: physical, risk-neutral, and the newly introduced recovered probability measure. Together they allow natural probabilities of future events to be identified from the market prices of derivatives.

Carr begins by explaining the concept of Arrow-Debreu securities, which are digital options that pay out based on a predetermined price level of an underlying asset. He delves into the estimation of prices for these securities and binary options. The focus then shifts to the change of numeraire technique in a univariate diffusion setting, which is used to derive results based on the Ross Recovery Theorem.

The speaker emphasizes the assumptions that facilitate the extraction of market beliefs from market prices. He highlights Ross's achievement in identifying these beliefs without relying on any additional assumptions, showcasing the power of the recovery theorem. By exploring the concept of numeraire portfolios, Carr explains the relationship between the growth optimal portfolio and the real-world growth rate.

The video further discusses the Kelly criterion, exotic and vanilla options, and the connection between digital options and market beliefs. It touches on the challenges faced in extending the theory to unbounded state spaces and the various assumptions made throughout the discussion.

Carr concludes by examining Ross's recovery theorem in detail, emphasizing its non-parametric approach to determining market beliefs without requiring specific parameters for market risk aversion. He highlights Ross's ability to extract market beliefs from market prices without invoking assumptions about representative investors or their utility functions.

Overall, this video provides a comprehensive exploration of the Ross Recovery Theorem, its applications, and the assumptions underlying its methodology. Carr's explanations offer valuable insights into the theory and its practical implications in extracting market beliefs from market prices.

  • 00:00:00 In this section, Peter Carr, Head of Global Market Modeling at Morgan Stanley, discusses a paper by Professor Stephen Ross of the Sloan School titled The Recovery Theorem. The theorem gives a sufficient set of conditions determining what Ross calls natural probabilities which are the probabilities regarding future events that can be determined from the market prices of derivatives, which are options traded on underlying securities such as stocks, indices, and currencies. Bloomberg publishes this information, which can be used with some assumptions to extract the implied market probabilities and output a probability transition matrix or density function.

  • 00:05:00 In this section, the three probability measures used in derivatives pricing are introduced, starting with P, which stands for physical and represents the actual probabilities of future states of, say, the S&P 500. The risk-neutral probability measure, usually denoted Q, is a fictitious device consistent with investors being risk-neutral, meaning they require no premium for bearing risk. Finally, there is a third probability measure, not found in the existing literature, that is about to be discussed.

  • 00:10:00 In this section, the speaker introduces the recovered probability measure, denoted R. This measure is derived from market prices and captures the market's beliefs about future events. The speaker distinguishes R from the physical reality captured by P, allowing for the possibility that the market could be wrong, although finance professionals who believe in market efficiency may always set R equal to P. The speaker points out that R is named after Ross, who calls the recovered measure the natural probability measure while describing the risk-neutral measure as unnatural. The risk-neutral measure corresponds to the prices of Arrow-Debreu securities, which pay off when particular events occur. As an example, there are two securities, one paying off if the S&P 500 goes up and one if it goes down; in an arbitrage-free world, their prices behave like probabilities of those events.

  • 00:15:00 In this section, Peter Carr explains what economists call Arrow-Debreu securities, which are in effect digital options: securities that pay out depending on whether an underlying asset finishes beyond a predetermined price level. The discussion leads to the concept of a representative agent, an idealized investor with a utility function and an endowment who holds exactly the portfolio that is optimal for him or her. Rather than relying on this concept, Carr prefers to work with a numeraire, the value of a portfolio with nice properties, such as the growth optimal portfolio, which has the largest growth rate in the long run.

  • 00:20:00 In this section of the video, Peter Carr discusses the Kelly criterion, a portfolio with the largest mean growth rate, which is widely popular among financial economists. However, there was resistance from some financial economists, like Paul Samuelson, who championed the opposition to the Kelly criterion. Samuelson even went to the extent of publishing an article with every word having one syllable, except for the last word 'syllable' itself. Later, Peter Carr briefly introduces the Arrow-Debreu security prices, which are digital options prices, and their connection to market beliefs, followed by a discussion on the Ross recovery theorem.

  • 00:25:00 In this section, Peter Carr explains how to apply the change of numeraire technique to a univariate diffusion setting to get results based on Ross recovery theorem. He defines the numeraire and clarifies that the value of the security must always be positive and explains how to change the numeraire to use an asset whose value is always positive. He also discusses the challenges faced in extending the work to an unbounded state space and how different assumptions are made in different parts of the talk. Finally, a member of the audience expresses their comments on the issue of numeraire, which leads to further discussion.

  • 00:30:00 In this section, Peter Carr explains the concept of a numeraire portfolio and how it works in investing. He uses the example of a portfolio with two securities, one risky and one riskless, where the investor puts a constant fraction of their wealth in each security. Every time the price changes, the investor needs to trade in order to maintain a constant fraction of their wealth invested in the risky asset. Carr also introduces the idea of digital options or binary options that pay off a unit of currency if an event comes true. He explains how to price these options and how they work in a finite-state setting with various discrete levels.

  • 00:35:00 In this section, the speaker explains the difference between exotic and vanilla options and introduces the concept of a butterfly spread payoff. He also explains how options can be combined to form a portfolio that perfectly replicates the payoff to an Arrow-Debreu security. The speaker notes that even if the FX market were not directly giving prices for digital options, the implicit price of a digital can be extracted from vanilla options. Additionally, he explains how assumptions can be made to estimate the probability of transitioning from one exchange rate to another.
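
A sketch of how the implicit Arrow-Debreu (digital) price can be backed out of vanilla options with a tight butterfly or call spread, as described here; the call prices below come from an assumed Black-Scholes world purely for illustration, not from the lecture.

```python
import numpy as np
from math import log, sqrt, exp
from scipy.stats import norm

def bs_call(S0, K, r, sigma, T):
    """Black-Scholes call price, used only to generate illustrative vanilla quotes."""
    d1 = (log(S0 / K) + (r + 0.5 * sigma**2) * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    return S0 * norm.cdf(d1) - K * exp(-r * T) * norm.cdf(d2)

S0, r, sigma, T = 1.30, 0.0, 0.10, 1.0      # illustrative FX-style inputs
K, dK = 1.35, 0.001

# Tight butterfly of calls approximates the Arrow-Debreu density at K:
# [C(K - dK) - 2 C(K) + C(K + dK)] / dK^2
ad_density = (bs_call(S0, K - dK, r, sigma, T) - 2 * bs_call(S0, K, r, sigma, T)
              + bs_call(S0, K + dK, r, sigma, T)) / dK**2

# Tight call spread approximates the digital paying 1 if S_T > K:
# [C(K) - C(K + dK)] / dK
digital = (bs_call(S0, K, r, sigma, T) - bs_call(S0, K + dK, r, sigma, T)) / dK

print(f"AD density at K={K}: {ad_density:.4f}   digital price: {digital:.4f}")
```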

  • 00:40:00 In this section, the speaker describes an assumption under which one can work with information at just today's level: assuming the probability of a given percentage change is invariant to the starting level, the vector of information given by the market can be turned into a matrix called the transition matrix. The speaker then discusses the frequency of transitions from one point to another and the reasons why the prices of Arrow-Debreu securities differ from the real-world probabilities of such transitions, citing the time value of money and risk aversion.

  • 00:45:00 In this section, the speaker explains Ross's Recovery Theorem, which extracts market beliefs about future events from market prices. He gives an example with Arrow-Debreu securities in which the underlying is equally likely to go up or down, yet the security with insurance value is thought to cost more. The speaker explains that Ross's paper makes assumptions that are mild and simple, illustrating the power of well-chosen assumptions, and that the recovery theorem enables one to extract market beliefs. Finally, the speaker reviews the terminology Ross uses, such as the pricing matrix, the natural probability transition matrix, and the pricing kernel, which normalizes prices for the effects of the time value of money and risk aversion.

  • 00:50:00 In this section, the video explains the assumptions made in Ross's recovery theorem. The first assumption is that the function phi of the two variables x and y has a specific form, which reduces the dimensionality of the search to a function of one variable and a scalar delta. The economic meaning of the one-variable function is marginal utility: how much extra happiness each additional unit of consumption brings. It is positive for every unit of consumption but declines as more units are consumed. Delta is a positive scalar that captures the time value of money. The video adds that the aim is to determine the composition of U prime with a function c of y rather than to find U prime as a function of c.

  • 00:55:00 In this section, Peter Carr discusses the Ross Recovery Theorem, a non-parametric approach to identifying market beliefs from market prices that requires no parameters for market risk aversion. Ross's assumptions allow market beliefs to be determined by finding the matrix P that represents them: starting from the Arrow-Debreu security prices A, a positive solution exists, and the pricing kernel phi, the ratio of A to P, is identified non-parametrically. Prior to Ross's paper, researchers assumed a representative investor with a specific utility function, but Ross identifies market beliefs without invoking any such assumption, making it far easier to infer what the market believes from market prices.
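
A small sketch of the recovery step on a discrete state space: given a matrix A of Arrow-Debreu (state) prices, Ross's assumption that the pricing kernel has the form delta * m(y) / m(x) turns the requirement that the recovered transition matrix P have unit row sums into a Perron-Frobenius eigenvalue problem (the theorem Carr refers to in the next section). The matrix below is made up for illustration.

```python
import numpy as np

# Illustrative 3-state Arrow-Debreu price matrix A (rows: current state,
# columns: next-period state).  Entries are positive and each row sums to
# slightly less than 1, consistent with a small amount of discounting.
A = np.array([[0.45, 0.35, 0.18],
              [0.30, 0.40, 0.28],
              [0.15, 0.35, 0.48]])

# Perron-Frobenius step: find the positive eigenvector z and eigenvalue delta of A z = delta z.
eigvals, eigvecs = np.linalg.eig(A)
i = np.argmax(eigvals.real)            # Perron root: largest (real, positive) eigenvalue
delta = eigvals[i].real                # time-value-of-money scalar
z = np.abs(eigvecs[:, i].real)         # positive eigenvector, z = 1 / marginal utility

# Recovered "natural" transition probabilities: P[x, y] = A[x, y] * z[y] / (delta * z[x]).
P = A * z[None, :] / (delta * z[:, None])
print("delta =", round(delta, 4))
print("row sums of P:", P.sum(axis=1))  # each should equal 1
```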

  • 01:00:00 In this section, Peter Carr explains the idea of changing numeraire in order to understand what Ross did with his recovery theorem. A numeraire is a portfolio whose value is always positive, and there is a well-developed theory in derivatives pricing about how to change the numeraire. Carr starts with an economy containing a so-called money market account and explains how the balance in this account grows and is random; he also notes that a bank could charge a negative rate, which would affect the balance. Carr refers to the Perron-Frobenius theorem and mentions that in a continuous setting one looks for a function and a scalar instead of a vector and a scalar.

  • 01:05:00 In this section, a theory called the Ross Recovery Theorem is discussed, which involves looking at a money market account and a set of risky assets and assuming there's no arbitrage between them. The uncertainty driving everything is called X, and it's assumed to be a diffusion, meaning it has continuous but non-differentiable sample paths. X could be anything, such as the level of the S&P 500 or an interest rate. If there's no arbitrage, then there exists a so-called risk-neutral probability measure denoted by Q, which is related but not equal to the Arrow-Debreu security prices. Under this probability measure Q, the expected return on all assets is the risk-free rate.

  • 01:10:00 In this section, we learn that the expected price change is the risk-free rate times the price, and how that leads to the expected return. The video discusses how to change numeraires and measure asset values in different numeraires. It goes on to explain that the covariance between the dollar/pound exchange rate and IBM affects the growth rate of the bank balance and is the key quantity when investing in IBM and depositing the gains in either an American or a British bank.

  • 01:15:00 In this section, the speaker discusses the process of finding a numeraire that will be correlated with the stocks to grow at a real-world drift of 9%, as opposed to the 1% initially set up in the risk-neutral measure Q. They mention that John Long's numeraire portfolio, also known as the growth optimal portfolio, is the numeraire that would convert the risk-free growth rate into the real-world growth rate. This section presents more assumptions, such as time homogeneity and bounded intervals of sample paths, to identify John Long's numeraire portfolio.

  • 01:20:00 In this section, the speaker explains that the usual notation for standard Brownian motion, W, conflicted with the notation for wealth, also W, leading to the choice of the letter Z for the Wiener process. He then introduces Long's numeraire portfolio, named after its inventor John Long, noting that its positions need not all be positive even though its value is. While we know the risk-neutral drift of X, written b^Q(X), and the diffusion coefficient a(X), we do not know the volatility of Long's numeraire portfolio, sigma_L(X), which is essential for recovering the real-world drift. This sigma_L also determines the covariance between Long's numeraire portfolio and an asset such as IBM, and that covariance is what is relevant.

  • 01:25:00 In this section, Peter Carr explains how to find the volatility function sigma_L under the assumption that the value of John Long's portfolio is a function of X and time. The unknown positive function then splits into an unknown function of X times an exponential function of time. The unknown function of X solves a Sturm-Liouville differential equation, which has a unique solution delivering a positive function pi and a scalar lambda, and from this the volatility of the numeraire portfolio is finally obtained. Carr then describes the efforts to extend the theory to unbounded intervals and concludes that this remains an open problem for graduate students to work on.
25. Ross Recovery Theorem — MIT 18.S096 Topics in Mathematics with Applications in Finance, Fall 2013. Instructor: Peter Carr. Complete course: http://ocw.mit.edu/18-S096F13

26. Introduction to Counterparty Credit Risk




This comprehensive video provides an in-depth exploration of Counterparty Credit Risk (CCR) and Credit Value Adjustment (CVA) and their significance in pricing derivatives. The speaker emphasizes the inclusion of CVA in derivative pricing, as it not only affects mark-to-market values but also introduces a portfolio effect that varies based on default risk. The accurate pricing of CVA is stressed, with a focus on non-linear portfolio effects and the complexities arising from asymmetries in receivables and liabilities. Strategies for managing CCR, such as collateralization and enterprise-level derivatives modeling, are discussed as means of addressing additional risks not captured by trade-level models. The video also touches upon challenges in modeling portfolios due to varying methodology requirements and the impact of CCR on the cash market.

To delve further into the content, the video presents a range of topics related to counterparty credit risk modeling. These include Schönbucher's model, martingale testing, resampling, and interpolation, highlighting the need for enterprise-level models to handle non-linear portfolio effects and supplement trade-level models. The speaker elaborates on finding the martingale measure of a CDS par coupon or forward CDS par rate, as well as the importance of martingale testing, resampling, and interpolation to ensure martingale conditions are met. The concept of changing the probability measure or numeraire to consistently model the entire yield curve is explored, accompanied by practical formulas and their implementation. The video concludes by acknowledging the complexity of modeling a portfolio of trades and suggesting potential research topics for further study.

Furthermore, the video addresses the significance of CCR in over-the-counter derivatives trading, emphasizing that default events can result in the loss of expected receivables. CVA is introduced as a means of adjusting the mark-to-market price by considering counterparty credit risk, similar to a corporate bond's risk. The impact of CCR on capital requirements, valuation, and return on equity is discussed, along with an example showcasing how the valuation of a trade can transform from apparent gains to losses when the counterparty defaults. Various risk categories, such as interest rate risk and liquidity funding risk, are examined, and strategies for managing CCR, such as CVA and CV Trading, are highlighted.

In addition, the video presents the concept of liability CVA, which focuses on the payable side and the likelihood of default by the bank or expert. It emphasizes the importance of accurately pricing CVA by understanding all the trades involved, including their non-linear option-like payoffs. The challenges posed by counterparty credit risk and liquidity funding risk are exemplified through the scenario of selling puts, with Warren Buffett's trade serving as a case study. The video also discusses managing CCR, exploring the use of credit-linked notes and the impact on credit spreads and bond issuance. Furthermore, it delves into the difficulties associated with modeling counterparty credit risk and the implications for the cash market, highlighting collateralization as an alternative and suggesting the purchase of collateralized credit protection from dealers as a possible strategy. Enterprise-level derivatives modeling is emphasized as a crucial aspect of understanding counterparty credit risk.

Moreover, the limitations of trade-level derivatives models are discussed, emphasizing the need for enterprise-level models to capture additional risks, such as non-linear portfolio risks. The complexities involved in modeling portfolios are explained, including variations in methodology requirements for each trade. Simulation, martingale testing, and resampling are introduced as techniques to address numerical inaccuracies and ensure martingale conditions are met. The speaker also explores forward swap rates, forward FX rates, and their relationship to martingales under specific measures and numeraire assets. Schönbucher's model is presented, focusing on survival measures, martingale measures, and the intricacies of finding the martingale measure of a CDS par coupon or forward CDS par rate. The video explains how the survival probability measure is defined using the Radon-Nikodym derivative and highlights the need to separately consider the impact of default in the model.

Furthermore, the speaker delves into martingale testing, resampling, and interpolation for counterparty credit risk modeling. Martingale testing involves ensuring that the numerical approximations satisfy the conditions of the model formula. If discrepancies arise, martingale resampling is employed to correct these errors. Martingale interpolation, on the other hand, is utilized when the model requires a term structure that is not explicitly available, allowing for interpolation while maintaining martingale relationships. The speaker provides insights into the process of interpolating and resampling to satisfy the martingale conditions for each term structure point.

The video emphasizes the significance of proper independent variables for interpolation, as it guarantees that the interpolated quantity automatically satisfies all the conditions of the martingale target. The identification of the martingale measure is explained, with the forward LIBOR serving as a martingale in its forward measure. The speaker notes the importance of changing the probability measure or numeraire to consistently model the entire yield curve, achieved through a straightforward change of numeraire.

Moreover, the importance of enterprise-level models is highlighted in managing non-linear portfolio effects and leveraging trade-level models for martingale testing, resampling, and interpolation. These models are crucial for effectively handling counterparty credit risk, as well as risks related to funding liquidity and capital. The speaker acknowledges time constraints but refers interested viewers to page 22 of the slides for an additional example. The professors conclude the lecture by expressing their appreciation for the students' dedication and hard work throughout the course, while offering themselves as a resource for future inquiries. They also announce that the class will be repeated in the upcoming fall, with potential modifications and improvements, encouraging students to visit the course website for further information.

Overall, this comprehensive video provides a detailed exploration of counterparty credit risk and its impact on pricing derivatives. It covers key concepts such as CCR, CVA, enterprise-level models, martingale testing, resampling, and interpolation. The video offers practical examples and insights into managing counterparty credit risk, emphasizing the importance of accurate pricing and addressing additional risks beyond trade-level models.

  • 00:00:00 In this section, we learn about counterparty credit risk, which arises mainly in over-the-counter derivatives trading, where one counterparty may owe the other money. A default event, including bankruptcy, means losing part of the expected receivable. CVA, the credit valuation adjustment, is the price of counterparty credit risk; it adjusts the mark-to-market price obtained from a counterparty-default-free model. It is sometimes compared to the risk on a corporate bond, known as issuer risk.

  • 00:05:00 In this section, the speaker discusses the importance of Counterparty Credit Risk (CCR) and Credit Value Adjustment (CVA) in terms of pricing derivatives and its impact on capital requirements, valuation, and return on equity. He explains how a CVA should be included in the pricing of derivatives as it not only affects the mark-to-market but also adds a portfolio effect, which can vary depending on the portfolio’s default risk. The speaker also provides an example of how a trade's valuation may seem to make gains but can turn out to be a loss if the counterparty defaults.

  • 00:10:00 In this section, Yi Tang asks the class to indicate whether they think they have lost or gained $50 million; few hands go up for having gained. Tang then asks why people might feel they have lost $50 million, pointing out that in the example scenario clients started at $0 and so are at a net position of +$50 million, yet many perceived this as a loss. Tang identifies the intermediary's loss as the cause, with dealers being required to hedge as a matter of course. CVA and CVA trading are highlighted as mitigation strategies, with CVA defined as the price of counterparty credit risk.

  • 00:15:00 In this section, the concept of credit value adjustment (CVA) is explained, including the formulas and their practical implementation. The video stresses the importance of understanding the conventions and signs in the formula, since getting the signs wrong quickly leads to confusion. It also discusses the non-linear portfolio effects, such as offsetting trades, and the asymmetric treatment of receivables and liabilities, which gives CVA an option-like payoff, to show why pricing CVA is complex and why one needs to know all the trades with a counterparty to price it accurately.
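
A minimal sketch of the kind of formula being discussed: a unilateral (asset) CVA as expected discounted positive exposure weighted by incremental default probabilities, with the option-like asymmetry coming from the positive-part operator. The exposure profile, hazard rate, and grid below are illustrative assumptions, not figures from the lecture.

```python
import numpy as np

# Discrete unilateral CVA:  CVA ~ (1 - R) * sum_i D(t_i) * EE(t_i) * dPD(t_i),
# where EE(t) = E[max(V(t), 0)] is the expected positive exposure to the counterparty.
R = 0.40                                   # assumed recovery rate
h = 0.02                                   # counterparty hazard rate (illustrative)
t = np.linspace(0.0, 5.0, 21)              # quarterly grid out to 5 years
D = np.exp(-0.02 * t)                      # illustrative risk-free discount factors
EE = 1_000_000 * np.sqrt(t)                # illustrative expected positive exposure profile

surv = np.exp(-h * t)
dPD = surv[:-1] - surv[1:]                 # default probability in each interval
mid_EE = 0.5 * (EE[:-1] + EE[1:])          # exposure averaged over the interval
mid_D = 0.5 * (D[:-1] + D[1:])

cva = (1 - R) * np.sum(mid_D * mid_EE * dPD)
print(f"CVA estimate: {cva:,.0f}")
```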

  • 00:20:00 In this section, a risk expert explains why modeling cross-asset derivative trades for counterparty credit risk is difficult, owing to the non-linear, option-like payoffs involved. He presents the concept of liability CVA, the analogue of asset CVA on the payable side, which reflects the likelihood that the bank itself defaults. He also argues that it is unnecessary to track which party defaults first when pricing CVA, and presents an example in which a trade's PV was zero on day one and later became $100 million with the counterparty risk properly hedged, asking whether any other risks remain.

  • 00:25:00 In this section, Yi Tang discusses the various categories of risk, including interest rate risk and key-man risk, and explains how market risks are hedged to handle the interest rate risk of the trade. Yi also introduces cash flow liquidity funding risk, explaining that uncollateralized derivative receivables need to be funded even though the cash has not yet been received. He further explains that the funding benefit from uncollateralized payables can be used to partially hedge the funding risk of uncollateralized receivables, which helps in managing this liquidity risk. The example of put options and put spreads is highlighted to illustrate the application of CVA.

  • 00:30:00 In this section, the video discusses the strategy of selling puts, which earns the seller premium income and expresses the view that stock prices will not fall. Warren Buffett famously sold long-dated puts on four leading stock indices, collecting about four billion dollars in premium without posting collateral. The trade posed challenges for the dealers on the other side, including counterparty credit risk, the possibility of Buffett defaulting, and liquidity funding risk, since Buffett could come to owe far more in a market sell-off. Dealers charged Buffett for these risks and funding costs, but some may not have had a proper CVA trading desk to manage them.

  • 00:35:00 In this section, the speaker delves into Counterparty Credit Risk (CCR) and how to manage it. He explains how counterparty risks are hedged and how, unlike a bond, the exposure to CCR may change over time. He provides a detailed example of how a "credit-linked note" type of trade was structured to manage the CCR, but warns that managing CCR could drive credit spreads even wider and potentially affect bond issuance. The section ends with a discussion on how Berkshire Hathaway managed its CCR during the 2008 financial crisis, by avoiding cash flow drain despite suffering unrealized mark-to-market losses.

  • 00:40:00 In this section, the speaker delves into counterparty credit risk and its impact on the cash market. A high credit spread in the CDS market can feed through to the cash market, affecting demand for the bonds and driving up the issuer's funding costs. Collateralization is explored as an alternative, along with the question of who ultimately bears the loss. The speaker then discusses ways of terminating the infinite series created by credit risk, suggesting that the simple strategy is to buy collateralized credit protection from a dealer. Finally, he highlights enterprise-level derivatives modeling as an important concept to understand.

  • 00:45:00 In this section, the speaker explains the limitations of trade-level derivatives models, which involve modeling each trade independently, aggregating their PV and Greeks via linear aggregation to get the PV of the portfolio. However, this approach doesn't account for additional risks, such as non-linear portfolio risks, which require further modeling. The speaker discusses one such risk, counterparty risk, and how enterprise-level models can help handle these risks more efficiently by modeling the counterparty risk in trades. The speaker explains the complexity of developing and implementing such models, including a significant amount of martingale testing and interpolation.

  • 00:50:00 In this section, the instructor explains the difficulties of modeling a portfolio of trades when each trade has its own methodology requirements. Simulation is generally used, and it can introduce numerical inaccuracies; these are corrected through martingale testing and resampling, which enforce the martingale conditions in the numerical procedure. The section also reviews examples of martingale measures for the forward price, forward LIBOR, forward FX rate, forward CDS par coupon, and forward swap rate. Each of these quantities is a ratio of traded assets with no intermediate cash flows, for example ratios involving zero-coupon bonds, and is a martingale under the measure associated with the corresponding numeraire.

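Two standard instances of this pattern, written in generic notation rather than the lecture's exact symbols, are the forward price and forward LIBOR:

```latex
% Forward price of an asset S for delivery at T: a ratio of traded assets
% with numeraire the zero-coupon bond P(t,T), hence a martingale under the
% T-forward measure Q^T.
F(t,T) = \frac{S(t)}{P(t,T)}, \qquad
F(t,T) = \mathbb{E}^{Q^T}\!\left[\,F(u,T)\mid\mathcal{F}_t\,\right],
\quad t \le u \le T.

% Forward LIBOR for [T_1, T_2] with accrual fraction \tau: a ratio with
% numeraire P(t,T_2), hence a martingale under the T_2-forward measure.
L(t;T_1,T_2) = \frac{P(t,T_1) - P(t,T_2)}{\tau\,P(t,T_2)}.
```
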
  • 00:55:00 In this section, the speaker discusses forward swap rates and forward FX rates and how they are martingales under particular measures with specific numeraire assets. They explain the technique of changing the probability measure and why the price of a traded security is measure-independent (see the identity below). Credit derivatives, however, introduce a problem: the risky annuity used as numeraire can be zero in cases where the reference credit entity has zero recovery upon default, and the speaker discusses potential solutions to this mathematical problem.

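The measure-independence of pricing is usually expressed through the change-of-numeraire identity; in generic notation (an illustration, not the lecture's exact derivation):

```latex
% Pricing with numeraire N under its associated measure Q^N:
\frac{V(t)}{N(t)}
  = \mathbb{E}^{Q^N}\!\left[\frac{V(T)}{N(T)} \,\middle|\, \mathcal{F}_t\right].

% Switching from numeraire N_1 (measure Q^1) to N_2 (measure Q^2) leaves the
% price V(t) unchanged; starting from time 0, the Radon-Nikodym derivative
% on \mathcal{F}_T is
\left.\frac{dQ^{2}}{dQ^{1}}\right|_{\mathcal{F}_T}
  = \frac{N_2(T)/N_2(0)}{N_1(T)/N_1(0)}.
```
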
  • 01:00:00 In this section, the speaker explains Schönbucher's model in credit risk, which is built around survival measures. The model addresses the difficulty of the numeraire, the risky annuity, being zero when recovery is zero. The speaker discusses how to find the martingale measure of a CDS par coupon or forward CDS par rate, which is the starting point of the martingale model. The survival probability measure is defined through a Radon-Nikodym derivative, and a martingale condition is obtained. Although the resulting probability measure is not equivalent to the original one, a change of probability measure is still possible, but the model must separately consider what happens when default occurs (see the sketch below).

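A common way to write a survival measure of this kind, in generic notation assumed here rather than taken from the lecture, uses the zero-recovery defaultable bond as numeraire:

```latex
% Money-market account \beta(t) with \beta(0)=1, default time \tau, and
% zero-recovery defaultable bond \bar{B}(0,T); the T-survival measure
% \bar{P}^T is defined via the density
\bar{B}(0,T) = \mathbb{E}^{Q}\!\left[\frac{\mathbf{1}_{\{\tau>T\}}}{\beta(T)}\right],
\qquad
\left.\frac{d\bar{P}^{T}}{dQ}\right|_{\mathcal{F}_T}
  = \frac{\mathbf{1}_{\{\tau>T\}}}{\beta(T)\,\bar{B}(0,T)}.

% The density vanishes on default paths, so \bar{P}^T is absolutely
% continuous with respect to Q but not equivalent to it, which is why
% default scenarios must be handled separately.
```
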
  • 01:05:00 In this section, the speaker introduces martingale testing, resampling, and interpolation for counterparty credit risk modeling. Martingale testing checks numerically whether the martingale conditions implied by the model are actually satisfied. If they are not, martingale resampling is used to correct the error introduced by numerical approximations (a minimal sketch follows below). Martingale interpolation is used when a trade requires a term-structure point that is not represented in the model; it interpolates while preserving the martingale relationships. The speaker explains how they interpolate and resample by enforcing the martingale conditions at each term-structure point.

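The idea behind martingale resampling can be sketched in a few lines. The example below is a minimal illustration under assumed inputs, not the lecture's production algorithm: simulated numeraire-deflated values are rescaled so that their sample mean matches the analytic martingale value.

```python
import numpy as np

def martingale_resample(deflated_values: np.ndarray, target: float) -> np.ndarray:
    """Multiplicatively rescale simulated numeraire-deflated values so that
    their sample mean equals the analytic martingale value `target` (e.g.
    today's deflated price). A minimal illustration of martingale resampling;
    production schemes are considerably more elaborate."""
    return deflated_values * (target / deflated_values.mean())

rng = np.random.default_rng(1)

# Toy example: a deflated asset whose true (martingale) expectation is 1.0,
# simulated with a small Monte Carlo error.
simulated = np.exp(rng.normal(-0.5 * 0.2**2, 0.2, size=5_000))  # lognormal, mean 1

print("martingale test, before:", simulated.mean())   # deviates slightly from 1.0
corrected = martingale_resample(simulated, target=1.0)
print("martingale test, after :", corrected.mean())   # equals 1.0 by construction
```
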
  • 01:10:00 In this section of the video, the speaker discusses martingale modeling, highlighting the need to choose the proper independent variable for interpolation; with that choice, the interpolated quantity automatically satisfies all the conditions of the martingale target. As an example of identifying the martingale measure, forward LIBOR is a martingale under its own forward measure, and martingale representation applies under certain technical conditions (see the sketch below). The speaker notes that modeling the entire yield curve consistently requires working under a single measure, which is achieved through a simple change of numeraire.

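In generic notation, and assuming a lognormal volatility purely for illustration (the lecture does not necessarily use this parametrization), the statement reads:

```latex
% Forward LIBOR L_k(t) for [T_k, T_{k+1}] is a martingale under the
% T_{k+1}-forward measure Q^{k+1} (numeraire P(t,T_{k+1})). By martingale
% representation, under suitable technical conditions it is driftless there:
dL_k(t) = \sigma_k(t)\,L_k(t)\,dW_k^{\,k+1}(t) \quad \text{under } Q^{k+1}.

% To model the whole curve consistently, all L_k must be expressed under one
% common measure; the change of numeraire introduces a drift term for those
% forwards whose natural measure differs from the chosen one.
```
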
  • 01:15:00 In this section, Yi Tang explains the need for enterprise-level models to handle non-linear portfolio effects and to leverage trade-level models through martingale testing, martingale resampling, and interpolation. He emphasizes that these models are critical for handling counterparty credit risk as well as funding, liquidity, and capital risks. Yi Tang also mentions that, due to time limits, he will not go through another example, but interested viewers can check page 22 of the slides. The professors wrap up the lecture with final comments and suggested research topics for the final paper. They acknowledge the course's challenging nature and appreciate the students' hard work and effort in the class.

  • 01:20:00 In this section, the professors conclude the course by expressing their hope that the students found it valuable and that they can be a good resource for them in the future. They encourage students to contact them with any questions or suggested topics for future classes. They also announce that a repeat of the class will be held the next fall, with potential changes and improvements. Lastly, they advise students to visit the website for additional information.
26. Introduction to Counterparty Credit Risk
  • 2015.01.06
  • www.youtube.com
MIT 18.S096 Topics in Mathematics with Applications in Finance, Fall 2013. View the complete course: http://ocw.mit.edu/18-S096F13. Instructor: Yi Tang. This lectu...