Quantitative trading

 

Wall Street: The speed traders

Many people are unaware that the majority of stock trades in the United States are no longer executed by human beings but by computers. These machines are capable of buying and selling thousands of different securities in the blink of an eye. High-frequency trading, as it is known, has become prevalent on Wall Street in recent years and played a role in the "flash crash" of May 2010, when the Dow Jones Industrial Average plummeted 600 points in just 15 minutes.

The Securities and Exchange Commission and members of Congress have started raising tough questions about the usefulness of computer trading, its potential dangers, and suspicions of market manipulation. The shift from human traders to machines has transformed the landscape of the New York Stock Exchange, once the center of the financial world. Now, less than 30% of trading occurs on the exchange floor, with the rest conducted through electronic platforms and alternative trading systems.

Two electronic stock exchanges, BATS and Direct Edge, owned by big banks and high-frequency trading firms, have emerged and trade over a billion shares per day at astonishing speeds. High-frequency trading firms like Tradeworx, run by Manoj Narang and a team of mathematicians and scientists known as quants (quantitative analysts), engage in this practice. They execute trades in fractions of a second, aiming to make a profit of a penny or less per trade. These firms rely on complex mathematical algorithms programmed into their computers to analyze real-time data and make split-second decisions.

One key aspect of high-frequency trading is that the computers have no understanding of the companies being traded. They do not know the value of the companies, their management, or any other qualitative factors. The trading decisions are purely based on quantitative factors, probability, and statistical analysis. This approach allows for capturing fleeting opportunities in the market but disregards fundamental factors.

High-frequency traders invest heavily in supercomputers and infrastructure to gain a speed advantage. The closer their computers are located to the stock exchange's servers, the quicker they receive critical market information. Even a few milliseconds of advantage can result in significant profits. Critics argue that high-frequency traders exploit this advantage to front-run orders, manipulate stocks, and extract money from the market without adding any real value.

While proponents claim that high-frequency trading increases market liquidity, reduces transaction costs, and tightens stock spreads, critics believe it undermines fairness and transparency. The high-speed nature of trading and the complexity of algorithms make it difficult for regulators to monitor and ensure a level playing field. The "flash crash" of 2010, when the Dow Jones plunged 600 points in a matter of minutes, exposed the potential risks associated with high-frequency trading and the lack of control.

Regulators and lawmakers have begun proposing reforms to address concerns related to high-frequency trading. The Securities and Exchange Commission is considering measures to track and identify high-frequency trades, and circuit breakers have been implemented to halt trading in cases of extreme price volatility. However, further changes are needed to restore confidence in the integrity of the market and provide transparency to average investors who feel that the system is rigged against them.

In recent years, high-frequency traders have expanded their activities into currency and commodity markets, further raising concerns about their impact on financial markets. The evolution of technology has outpaced the ability of regulators to keep up, and there is a growing call for reforms that strike a balance between innovation and market integrity.

Wall Street: The speed traders
  • 2011.06.05
  • www.youtube.com
Steve Kroft gets a rare look inside the secretive world of "high-frequency trading," a controversial technique the SEC is scrutinizing in which computers can ma...
 

Mathematical Modeling and Computation in Finance: With Exercises and Python and MATLAB Computer Codes

"Mathematical Modeling and Computation in Finance: With Exercises and Python and MATLAB Computer Codes", by C.W. Oosterlee and L.A. Grzelak, World Scientific Publishing, 2019.

"Mathematical Modeling and Computation in Finance: With Exercises and Python and MATLAB Computer Codes" is an invaluable book that explores the intersection of mathematics, finance, and computer science. Written by experts in the field, it provides a comprehensive guide to understanding and implementing mathematical models in finance using popular programming languages like Python and MATLAB.

The book begins by introducing readers to the fundamental concepts of mathematical modeling in finance, including probability theory, stochastic calculus, and optimization techniques. It emphasizes the practical aspects of modeling and computation, highlighting the importance of numerical methods and simulation in solving real-world financial problems.

One of the standout features of this book is its inclusion of numerous exercises and computer codes in Python and MATLAB. These exercises allow readers to actively engage with the material, reinforce their understanding of the concepts, and develop their programming skills. By working through the exercises and implementing the provided codes, readers can gain hands-on experience in applying mathematical models to finance and enhance their proficiency in using these programming languages for financial analysis.

The book covers a wide range of topics relevant to finance, such as option pricing, portfolio optimization, risk management, and asset allocation. It delves into advanced topics like volatility modeling, interest rate modeling, and credit risk modeling, providing readers with a comprehensive understanding of the mathematical techniques used in financial modeling.

The authors strike a balance between theoretical rigor and practical application throughout the book. They provide clear explanations of the underlying mathematical concepts and algorithms, accompanied by real-world examples and case studies. This approach enables readers to grasp the theoretical foundations while also gaining insights into how these models can be applied to solve practical financial problems.

Furthermore, the book highlights the advantages and limitations of different modeling approaches, equipping readers with the critical thinking skills necessary to make informed decisions when choosing and implementing models in real-world scenarios.

"Mathematical Modeling and Computation in Finance: With Exercises and Python and MATLAB Computer Codes" is an excellent resource for students, researchers, and practitioners in the field of finance who are looking to deepen their understanding of mathematical modeling and computational methods. Its combination of theoretical explanations, practical exercises, and ready-to-use computer codes makes it an essential companion for anyone interested in applying mathematical techniques to solve financial problems.

https://github.com/LechGrzelak/Computational-Finance-Course

GitHub - LechGrzelak/Computational-Finance-Course: Here you will find materials for the course of Computational Finance
  • LechGrzelak
  • github.com
The course is based on the book: "Mathematical Modeling and Computation in Finance: With Exercises and Python and MATLAB Computer Codes", by C.W. Oosterlee and L.A. Grzelak, World Scientific Publishing, 2019. Content of the course:
 

This Computational Finance course is based on the book "Mathematical Modeling and Computation in Finance: With Exercises and Python and MATLAB Computer Codes".


Computational Finance: Lecture 1/14 (Introduction and Overview of Asset Classes)

This comprehensive lecture serves as an introduction to the fascinating fields of computational finance and financial engineering, covering a wide range of topics essential for understanding modern finance. The lecturer emphasizes the importance of theoretical models from mathematical and computational finance, which are utilized to create practical models for pricing derivatives under various scenarios.

In the course on computational finance, students will delve into various topics that are crucial to understanding and applying practical financial methods. Led by the instructor, Lech Grzelak, the course will emphasize the implementation of efficient programming techniques using Python for simulation and option pricing. This comprehensive program is designed for individuals interested in finance, quantitative finance, and financial engineering. It will cover essential concepts such as implied volatilities, hedging strategies, and the fascinating realm of exotic derivatives.

Computational finance is an interdisciplinary field situated between mathematical finance and numerical methods. Its primary objective is to develop techniques that can be directly applied to economic analysis, combining programming skills with theoretical models. Financial engineering, on the other hand, encompasses a multidisciplinary approach that employs financial theory, engineering methods, mathematical tools, and programming practices. Financial engineers play a critical role in creating practical models based on mathematical and computational finance, which can be utilized to price derivatives and handle complex financial contracts efficiently. These models must be theoretically sound and adaptable to diverse scenarios.

The course will shed light on different asset classes traded in computational finance, including stocks, options, interest rates, foreign exchange, credit markets, commodities, energy, and cryptocurrencies. Cryptocurrencies, in particular, offer exposure to various asset classes and can be employed for hedging purposes. Each asset class has its unique contracts used for risk control and hedging strategies. The Over-the-Counter (OTC) market, with its multiple counterparties, presents additional complexities that need to be understood.

The lecturer will explore the role of cryptocurrencies in finance, emphasizing their diverse features and the need for specific methodologies, models, and assumptions for pricing. Additionally, the market shares of different asset classes, such as interest rates, forex, equities, commodities, and credit default swaps (CDS), will be examined. While options represent a relatively small portion of the financial world, they offer a distinct perspective on financial and computational analysis.

The topic of options and speculation will be thoroughly discussed, highlighting how options provide an alternative to purchasing stocks by allowing individuals to speculate on the future direction of a stock with a relatively small capital investment. However, options have a maturity date and can lose value if the stock price remains unchanged, making timing a crucial factor in speculation. The course will provide an introduction to financial markets, asset classes, and the role of financial engineers in navigating these complex landscapes. Stocks, as the most popular asset class, will be explored in detail, emphasizing the concept of ownership and how stock value is influenced by company performance and future expectations.

The lecture will shed light on the stochastic nature of stock behavior in the market, influenced by factors such as supply and demand, competitors, and company performance. The expected value of a stock may differ from its actual value, leading to volatility. Volatility is a crucial element in modeling and pricing options as it determines the future fluctuations in stock prices. Additionally, the lecture will distinguish between two types of investors: those interested in dividend returns and those seeking growth opportunities.

The concept of dividends and dividend investing will be introduced, emphasizing how dividends provide a steady and certain investment as companies distribute payments to shareholders regularly. However, dividend payments can vary, and high dividend yields may indicate increased risk in a company's investments. The lecture will touch briefly on interest rates and money markets, acknowledging that these topics will be covered more extensively in a follow-up course.

Inflation and its impact on interest rates will be discussed, elucidating how central banks control inflation by adjusting interest rates. The lecture will explore the short-term benefits and long-term implications of lowering interest rates, as well as alternative strategies such as modern monetary theory or asset purchases by central banks. Moreover, the role of uncertainty among market participants in determining interest rates and the hidden tax effect of inflation on citizens will be explained. The lecture will conclude by delving into the topic of risk management in lending. The lecturer will highlight the potential risks faced by lenders, such as borrowers going bankrupt or defaulting on loans. To mitigate these risks, lenders often charge a risk premium to ensure they are adequately compensated for any potential losses.

Moving forward, the speaker will shift the focus to interest rates and their significance in finance. They will explain how interest rates affect various financial instruments, including savings accounts, mortgages, and loans. The concept of compounding interest will be introduced, emphasizing the notion that one unit of currency today is worth more than the same unit in the future due to factors like inflation. The two main methods of calculating interest rates, simple and compounded, will be discussed, with a detailed explanation of their differences and practical examples.

The speaker will then delve deeper into compounded interest rates, particularly for investments with a one-year maturity. They will explain the mathematical modeling of compounded rates using the exponential function, where one unit of currency is multiplied by e raised to the power of the interest rate. Furthermore, the speaker will describe how this mathematical representation aligns with the differential equations that govern savings accounts, leading to the determination of the multiplication factor used to discount future cash flows. However, the speaker will note that in reality, interest rates are not constant but vary over time, as evidenced by different instruments such as tenors and prices for currencies like the Euro and the USD.
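The simple, compounded, and continuously compounded growth factors described above can be sketched numerically. The 5% rate and one-year horizon below are illustrative choices, not figures from the lecture:

```python
import math

def simple_growth(r: float, t: float) -> float:
    """Growth factor with simple interest: 1 + r*t."""
    return 1.0 + r * t

def compound_growth(r: float, t: float, m: int) -> float:
    """Growth factor with interest compounded m times per year."""
    return (1.0 + r / m) ** (m * t)

def continuous_growth(r: float, t: float) -> float:
    """Continuous compounding: the e^(r*t) factor from the lecture."""
    return math.exp(r * t)

r, t = 0.05, 1.0  # illustrative 5% rate over one year
print(simple_growth(r, t))            # 1.05
print(compound_growth(r, t, 12))      # ~1.0512
print(continuous_growth(r, t))        # ~1.0513
print(1.0 / continuous_growth(r, t))  # discount factor e^(-r*t), ~0.9512
```

The reciprocal of the continuous growth factor is exactly the multiplication factor used to discount future cash flows in the lecture's differential-equation derivation.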

The graphs representing interest rates and market liquidity for the Eurozone and the dollar will be discussed. Notably, the current state of the Eurozone reveals negative yields across all maturities up to 30 years, implying that investing in government bonds within the Eurozone could result in a loss of money. The speaker will suggest that individuals may prefer to exchange Euros for dollars and invest in US bonds, as they offer higher yields. Nevertheless, this approach carries risks, including potential losses due to foreign exchange rate fluctuations. The speaker will emphasize that interest rates are time-dependent and subject to market dynamics.

The lecturer will shed light on the concept of buying bonds, highlighting that bond buyers often pay more than the actual worth of the bond. Consequently, the value of money invested in bonds may depreciate over time, and inflation can erode the investment's value. Major buyers of bonds, such as pension funds and central banks, will be mentioned, underscoring their significant role in the bond market. Furthermore, the lecturer will touch upon the concept of volatility, which measures the variation in financial prices over time. Volatility is calculated using statistical measures like variance and provides insights into the tendency of a market or security to fluctuate, introducing uncertainty and risk.

The course will then shift its attention to asset returns and volatility, two crucial concepts in computational finance. Asset returns refer to the gains or losses of a security within a specific time period, while volatility measures the variance of these returns. A highly volatile market indicates significant price swings in a short span, resulting in heightened uncertainty and risk. The VIX index, an instrument that gauges market uncertainty, will be introduced. It is constructed from out-of-the-money options and is commonly employed by investors to protect their capital in the event of a market decline. The importance of timing and predicting exposure times will be emphasized, as they can be challenging in practice.
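As a rough illustration of the returns-and-volatility definitions above, the following sketch computes log returns and an annualized sample volatility. The price series and the 252-trading-day annualization convention are assumptions for illustration, not data from the course:

```python
import math

def log_returns(prices):
    """Log returns between consecutive prices."""
    return [math.log(p1 / p0) for p0, p1 in zip(prices, prices[1:])]

def annualized_volatility(prices, periods_per_year=252):
    """Sample standard deviation of log returns, scaled to annual terms."""
    r = log_returns(prices)
    mean = sum(r) / len(r)
    var = sum((x - mean) ** 2 for x in r) / (len(r) - 1)
    return math.sqrt(var) * math.sqrt(periods_per_year)

prices = [100, 101, 99.5, 102, 101.2, 103]  # made-up daily closes
print(annualized_volatility(prices))
```

A wildly swinging series produces a much larger number than a steady one, which is exactly the uncertainty the VIX-style measures try to capture.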

The instructor will discuss the intricacies of analyzing the volatility of various indices, including the VIX index. They will acknowledge the difficulties in mathematically modeling volatility due to market circumstances and fluctuations. Additionally, European options, which serve as fundamental building blocks for derivative pricing based on volatility, will be introduced. The lecturer will provide a clear distinction between call options and put options, explaining that call options grant the holder the right to buy an asset at a predetermined price and date, while put options give the holder the right to sell an asset at a predetermined price and date, essentially acting as insurance.

With the foundation of options established, the lecturer will present an overview of options within different asset classes. They will emphasize the two key types of options: call options and put options. In the case of a call option, the buyer has the right to buy the underlying asset from the writer at a specified maturity date and strike price. This means that at maturity, the writer is obliged to sell the stock at the strike price if the buyer chooses to exercise the option. A put option, by contrast, grants the buyer the right to sell the underlying asset to the writer at the specified maturity date and strike price. At maturity, the writer must purchase the stock at the strike price if the buyer exercises the option.

To illustrate the potential profitability of options, the lecturer presents two graphical representations—one for call options and another for put options. These graphs depict the potential profit or loss based on the value of the underlying stock. By examining the graphs, viewers can gain insights into how changes in the stock's value can affect the profitability of options.
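The profit-and-loss graphs described above reduce to simple payoff formulas at expiry. The strike and premium values in this sketch are hypothetical, chosen only to show the shapes:

```python
def call_profit(s_t: float, strike: float, premium: float) -> float:
    """Long call profit at expiry: payoff max(S - K, 0) minus the premium paid."""
    return max(s_t - strike, 0.0) - premium

def put_profit(s_t: float, strike: float, premium: float) -> float:
    """Long put profit at expiry: payoff max(K - S, 0) minus the premium paid."""
    return max(strike - s_t, 0.0) - premium

K, c, p = 100.0, 5.0, 4.0  # illustrative strike and premiums
for s in (80, 100, 120):
    print(s, call_profit(s, K, c), put_profit(s, K, p))
```

Evaluating at stock prices below, at, and above the strike reproduces the hockey-stick shapes in the lecture's graphs: the loss of a long option is capped at the premium, while the upside grows with the move in the stock.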

Throughout the course, the instructor will explore additional advanced topics related to computational finance, including modeling of derivatives, efficient programming implementation, and the use of Python for simulation and option pricing. They will program live during the sessions and analyze results collaboratively with the viewers, providing hands-on experience and practical insights.

The course is specifically designed for individuals interested in finance, quantitative finance, and financial engineering. It aims to bridge the gap between mathematical finance and numerical methods, offering interdisciplinary knowledge and skills required to tackle real-world financial problems. The concepts of implied volatilities, hedging strategies, and exotic derivatives will also be covered, providing a comprehensive understanding of computational finance and its applications in the financial industry.

By the end of the course, participants will have gained a solid foundation in computational finance, financial engineering, and the practical application of numerical methods. They will be equipped with the tools and knowledge to develop and implement models for pricing derivatives, managing risks, and analyzing financial data. This course serves as a stepping stone for those seeking to pursue careers in finance, quantitative analysis, or financial engineering, empowering them to make informed decisions and contribute to the ever-evolving field of computational finance.

  • 00:00:00 The course will cover various topics related to computational finance, including modeling of derivatives, efficient implementation of programming, and the use of Python for simulation and option pricing. The course instructor, Lech Grzelak, will program live and analyze results together with the viewers. The course is designed for those interested in finance, quantitative finance, and financial engineering, and will also cover the concepts of implied volatilities and hedging. The course will conclude with a discussion of exotic derivatives.

  • 00:05:00 In this section, the focus is on computational finance, which is an applied computer science branch that deals with practical financial problems and emphasizes practical numerical methods. This field is interdisciplinary, between mathematical finance and numerical methods. The goal of computational finance is to develop techniques that can be directly applied to economic analysis, and this involves using programming and theoretical models. Another aspect discussed is financial engineering, which is a multi-disciplinary field that applies financial theory, engineering methods, mathematics tools, and programming practice. Financial engineering and computational finance are related, and financial engineers develop models that are practical, workable, fast, and efficient and can be used by financial institutions for pricing derivatives and implementing hedging strategies.

  • 00:10:00 In this section, the role of financial engineering in developing models for complex financial contracts is discussed. Financial engineers use theoretical models from mathematical and computational finance to create practical models that can be used to price derivatives and other complicated contracts. The models have to be theoretically correct and perform in a wide range of scenarios. Financial engineering is driven by a customer's needs and requires a multi-disciplinary skillset, including quantitative modeling and programming. The lecture also explains the main asset classes in finance, including stocks and options exchanges, which financial engineers price using their models and tools.

  • 00:15:00 In this section, the speaker discusses the various asset classes that are traded in computational finance. There are stocks, options, interest rates, foreign exchange, the credit market, commodities, energy, and cryptocurrencies. In the case of cryptocurrencies, there are many different types depending on their features and they can also be considered as an option market. The speaker touches on different contracts within each asset class used to hedge and control risk. Additionally, the speaker notes that some markets, such as the OTC market, are designed for the risk profile of clients and involve multiple counterparties.

  • 00:20:00 In this section, the speaker discusses the role of cryptocurrencies in finance and explains how they are designed to offer exposure to different asset classes. Cryptocurrencies can be used to hedge risks, and some also provide exposure to stocks, gold, silver, and oil. Different cryptocurrencies have unique characteristics, requiring different methodologies, models, and assumptions for pricing. The speaker then goes on to discuss the market share of different asset classes, such as interest rates, forex, equities, commodities, and CDS. While options are a tiny part of the financial world, they are still important and offer a unique perspective on financial and computational analysis.

  • 00:25:00 In this section, the topic of options and speculation is discussed. Options can be a cheaper alternative to buying stocks, allowing one to bet on the future direction of a stock with a small capital investment. However, options have a maturity date and lose value if nothing happens to the stock price during that time, making timing a significant challenge in speculation. The lecture introduces the concept of financial markets, asset classes, and the role of a financial engineer. The first and most popular asset class, stocks or equities, is also explored, including how buying a stock means becoming an owner of the company and how the value of a stock is dependent on the company's performance and expectations of future payments.

  • 00:30:00 In this section, the speaker discusses the behavior of stocks in the market, which is stochastic and influenced by various factors such as supply and demand, competitors, and company performance. This means that the expected value of a stock can differ from its actual value, resulting in volatility. Volatility is an important element in modeling and pricing options, as it determines the fluctuations of a stock's price in the future. Additionally, the owner of a stock theoretically owns a piece of the company and can receive dividends or reap benefits from the growth of the stock. There are two types of investors: those interested in returns from dividends and those looking for growth opportunities.

  • 00:35:00 In this section of the video, the concept of dividends and dividend investing is discussed. Dividend investing is attractive to those who want a steady and relatively certain investment, as every quarter or half year a company will distribute payments to shareholders. However, dividends can vary from year to year, and high dividend payments may indicate more risk in a company's investments. The video also briefly touches on interest rates and money markets, noting that interest is a percentage of the principal, but this topic will be covered in a follow-up course.

  • 00:40:00 In this section, the lecturer discusses inflation and the impact of interest rates on the economy. When the economy is doing well and money circulation increases, there is a risk of inflation, which can be controlled by banks through an increase in interest rates. However, lowering interest rates can provide a short-term boost to the economy, but this is not a long-term solution. Central banks may use modern monetary theory or buying assets in the market as an alternative. Additionally, the lecturer explains how interest rates are affected by market participants' uncertainty towards receiving money from banks and how inflation can act as a hidden tax on citizens. Finally, the lecturer talks about risk management in lending and suggests that a borrower may go bankrupt or default on loans, which leads to a risk premium to ensure that the lender is compensated for any loss.

  • 00:45:00 In this section, the speaker discusses interest rates and their importance in finance. They explain how interest rates affect savings accounts, mortgages, and loans. The speaker discusses how interest rates can be modelled and that the simplest concept is that one euro today is worth more than one euro in a year's time due to factors such as inflation. The two main ways of compounding and calculating interest rates are simple and compounded, with compounded interest taking place over the lifetime of the investment. The speaker defines these terms and provides examples to illustrate them.

  • 00:50:00 In this section, the speaker discusses the concept of compounded interest rates for a one-year maturity. The compounded amount is calculated as one euro times e to the power r. The speaker explains how this is modeled mathematically via a differential equation describing a savings account. The solution to the differential equation gives the multiplication factor, which is used to discount future cash flows. However, the speaker notes that in reality, interest rates are not constant but time-dependent, as illustrated by instruments with various tenors and prices for the euro and the USD.

  • 00:55:00 In this section of the video, the speaker discusses the graphs representing interest rates and market liquidity for Eurozone and the dollar. The graphs show that currently, all yields for Euro up to 30 years are negative, meaning that investing in government bonds in Europe would result in losing money. The speaker suggests that people would prefer to exchange Euros for dollars and invest in US bonds as they provide higher yields. However, there is a risk involved as the foreign exchange rate may decrease, deteriorating potential profits. The speaker also notes that interest rates are time-dependent and not constant.
  • 01:00:00 In this section, the lecturer discusses the concept of buying bonds. Bond buyers may pay more than the bond is worth, and as a result the value of the money invested can deteriorate over time; inflation can further erode the investment. Pension funds and central banks are the main buyers of bonds. The lecturer also touches upon the concept of volatility, a measure of the variation of financial prices over time, calculated using the variance of returns as a statistical measure of the tendency of a market or security to rise or fall within a period of time.

  • 01:05:00 In this section, we learn about asset returns and volatility, two important concepts in computational finance. Asset returns are the gains or losses of a security within a specific time period, and volatility measures the variance of these returns. A highly volatile market means that prices can swing drastically in a short amount of time, which can lead to uncertainty and risk. The VIX index is an example of a market instrument that measures uncertainty and is constructed using out-of-the-money options. It is often used by investors to protect their capital in the event of a drop in market value. However, timing is crucial when using it, as the exposure times can be very short and difficult to predict.

  • 01:10:00 The instructor discusses the volatility of various indices, including the VIX index, and how it can be difficult to analyze mathematically due to market circumstances and fluctuations. He then introduces European options, which are a fundamental building block of derivative pricing on volatility, with a one-to-one correspondence between option price and volatility. The instructor explains the differences between call and put options, with a call option giving the holder the right to buy an asset at a future date for a set price, while a put option gives the holder the right to sell an asset at a future date for a set price, essentially acting as insurance.

  • 01:15:00 In this section, the lecturer presents an overview of options within asset classes and identifies two key types of options: call options and put options. In the case of a call option, the buyer can buy from the writer at a specified maturity date and strike price, which means that at maturity the writer is obliged to sell stock at the strike price if the option is exercised. In contrast, for a put option, the buyer can sell to the writer at maturity, and this time the writer must buy stock at the specified strike price. The lecturer then presents two graphs, one for each type of option, highlighting their potential profit depending on the value of the stock.
Computational Finance: Lecture 1/14 (Introduction and Overview of Asset Classes)
  • 2021.02.21
  • www.youtube.com
Computational Finance Lecture 1- Introduction and Overview of Asset Classes▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬This course is based on the book:"Mathemati...
 

Computational Finance: Lecture 2/14 (Stock, Options and Stochastics)

The instructor begins by providing an overview of the course, emphasizing the importance of understanding trading confidence, hedging, and the necessity of mathematical models in finance. They delve into the topic of pricing put options and explain the concept of hedging. Stochastic processes and asset price modeling are also covered, with the introduction of Ito's lemma as a tool for solving stochastic differential equations.
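As a minimal illustration of the stochastic asset-price modeling mentioned above, the following sketch simulates a geometric Brownian motion path using the closed-form step that Ito's lemma yields for the SDE dS = mu*S dt + sigma*S dW. All parameter values are illustrative assumptions, not numbers from the lecture:

```python
import math
import random

def simulate_gbm(s0: float, mu: float, sigma: float, t: float, steps: int, seed: int = 0):
    """Simulate one geometric Brownian motion path.

    Each step uses the exact Ito solution
    S_{t+dt} = S_t * exp((mu - sigma^2/2) * dt + sigma * sqrt(dt) * Z),
    so the path stays strictly positive.
    """
    random.seed(seed)
    dt = t / steps
    path = [s0]
    for _ in range(steps):
        z = random.gauss(0.0, 1.0)
        path.append(path[-1] * math.exp((mu - 0.5 * sigma**2) * dt
                                        + sigma * math.sqrt(dt) * z))
    return path

path = simulate_gbm(s0=100.0, mu=0.05, sigma=0.2, t=1.0, steps=252)
print(path[-1])  # terminal stock price of one simulated year
```

Repeating the simulation over many seeds and averaging discounted payoffs is the Monte Carlo pricing idea the course builds toward.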

To illustrate the practical application of these concepts, the instructor presents an example of a training strategy where an investor seeks to protect their investment from potential stock value decrease. They suggest buying insurance in the form of put options to ensure a minimum amount of money in a worst-case scenario.

Moving on to options trading, the lecturer focuses on the use of put options to protect against downward movements in stock prices. However, they note that buying put options can be expensive, particularly when the stock's volatility is high, as exemplified by Tesla. To reduce option costs, one can decrease the strike price, but this means accepting a lower price for the stock. The lecturer provides a screenshot from Reuters showcasing different types of options available in the market, categorized by maturity and strike price. They also explain the relationship between strike price and option prices for call and put options.

Implied volatility is introduced as a measure of market uncertainty. The lecturer explains that lower strike prices are associated with higher implied volatility. Delta, which measures an option's value dependence on the underlying asset, is also introduced. The video then delves into the concept of hedging and how a ratio can be established to achieve a risk-free portfolio, albeit potentially limiting gains if the stock does not increase in value. Hedging with options is discussed, highlighting its suitability for short-term investments, but noting its potential costliness during periods of high volatility.

Options trading is further explored as a means of hedging and risk reduction. The lecturer suggests that options are typically more desirable for short-term investments with a definite maturity, as they can be costly for long-term investments. The concept of hedging with calls is introduced, emphasizing how selling options can help reduce risk for investors holding a large portfolio of stocks. However, caution is advised against selling too many calls, as it can restrict potential upside and always carries a certain degree of risk.

The video then delves into commodities, explaining that they are raw materials used as hedges against inflation due to their unpredictable but often seasonal price patterns. Commodity trading is primarily conducted in the futures market, where deals are made to buy or sell commodities at a future date. The distinction between electricity markets and other commodities is highlighted, with electricity posing unique challenges due to its inability to be fully stored and its impact on derivative predictability and value.

The lecturer proceeds to discuss currency trading as an asset class, commonly referred to as the foreign exchange market. Unlike traditional buying or selling of a particular exchange rate, individuals exchange amounts of money between currencies. The lecturer emphasizes the role of the US dollar as the base currency and a reserve currency. They also touch upon the manipulation of exchange rates by Central Banks to strengthen or weaken currencies. Additionally, a small application of foreign exchange derivatives for hedging currency risks in international business is mentioned.

The speaker explains how banks and financial institutions can purchase or sell insurance against fluctuating exchange rates to manage investment uncertainties. Investing in different countries can introduce uncertainties due to varying currency strengths and monetary policies, leading to uncertain returns. Computational finance plays a crucial role in managing and calculating risks associated with such investments by modeling uncertainties and considering various factors. The speaker further notes that bitcoins can be considered foreign exchange rates and discusses their hybrid nature as a regulated commodity with value determined through exchange against the US dollar. The volatility of bitcoins makes their future value challenging to predict.

Furthermore, the speaker explores the concept of risk-neutral pricing, which is a fundamental principle in options pricing. Risk-neutral pricing assumes that in a perfectly efficient market, the expected return on an option should be equal to the risk-free rate. This approach simplifies the pricing process by considering the probabilities of different outcomes based on a risk-neutral measure, where the expected return on the option is discounted at the risk-free rate.
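Risk-neutral pricing as described above is easiest to see in a one-period binomial setting: the option value is the payoff's expectation under the risk-neutral probability, discounted at the risk-free rate. The numbers below are illustrative, not from the lecture.

```python
import math

def risk_neutral_price(s0, k, u, d, r, t):
    """One-period binomial price of a European call: the discounted
    expectation of the payoff under the risk-neutral probability q."""
    q = (math.exp(r * t) - d) / (u - d)      # risk-neutral up-probability
    payoff_up = max(s0 * u - k, 0.0)         # payoff if the stock moves up
    payoff_down = max(s0 * d - k, 0.0)       # payoff if the stock moves down
    return math.exp(-r * t) * (q * payoff_up + (1 - q) * payoff_down)

# Example: 100 -> 120 or 80, zero rate, strike 100.
print(risk_neutral_price(100.0, 100.0, 1.2, 0.8, 0.0, 1.0))
```

Note that the real-world probability of an up move never appears: only the risk-neutral probability q, determined by u, d, and r, enters the price.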

The speaker then introduces the Black-Scholes-Merton (BSM) model, which is a widely used mathematical model for pricing options. The BSM model incorporates various factors such as the current stock price, strike price, time to expiration, risk-free interest rate, and volatility of the underlying asset. It assumes that the underlying asset follows geometric Brownian motion and that the market is efficient.

The speaker explains the key components of the BSM model, including the formula for calculating the value of a European call or put option. They emphasize the importance of volatility in option pricing, as higher volatility increases the value of an option due to the potential for larger price fluctuations. The speaker also mentions the role of implied volatility, which is the market's expectation of future volatility implied by the option prices.
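The closed-form prices referred to above can be sketched using only the standard library; the normal CDF is built from `math.erf`, and the parameter values in the comments are illustrative, not from the lecture.

```python
import math

def _norm_cdf(x: float) -> float:
    """Standard normal CDF via the error function (stdlib only)."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def bs_price(s, k, t, r, sigma, kind="call"):
    """Black-Scholes-Merton price of a European call or put.

    s: spot, k: strike, t: time to maturity (years),
    r: risk-free rate, sigma: volatility.
    """
    d1 = (math.log(s / k) + (r + 0.5 * sigma**2) * t) / (sigma * math.sqrt(t))
    d2 = d1 - sigma * math.sqrt(t)
    if kind == "call":
        return s * _norm_cdf(d1) - k * math.exp(-r * t) * _norm_cdf(d2)
    return k * math.exp(-r * t) * _norm_cdf(-d2) - s * _norm_cdf(-d1)

# e.g. an at-the-money one-year call: s=100, k=100, r=5%, sigma=20%
print(bs_price(100.0, 100.0, 1.0, 0.05, 0.2))
```

Raising `sigma` in this formula increases both call and put values, which is the volatility effect the speaker emphasizes; solving the formula backwards for `sigma` given a market price yields the implied volatility.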

Next, the lecture delves into the concept of delta hedging, which is a strategy used to minimize risk by maintaining a neutral position in the underlying asset. Delta measures the sensitivity of an option's price to changes in the price of the underlying asset. By adjusting the number of shares held in the underlying asset, an investor can create a delta-neutral portfolio that is less affected by price movements.

The speaker explains the process of delta hedging using the BSM model and demonstrates how it can effectively reduce risk. They discuss the concept of dynamic hedging, where the hedge is continuously adjusted as the price of the underlying asset changes. This ensures that the portfolio remains delta-neutral and minimizes the exposure to market fluctuations.
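A minimal sketch of the delta-neutrality idea, assuming the Black-Scholes call price and delta (the numbers are illustrative): a position that is long the option and short delta shares barely moves for a small change in the underlying.

```python
import math

def _ncdf(x):
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def call_price(s, k, t, r, sigma):
    d1 = (math.log(s / k) + (r + 0.5 * sigma**2) * t) / (sigma * math.sqrt(t))
    d2 = d1 - sigma * math.sqrt(t)
    return s * _ncdf(d1) - k * math.exp(-r * t) * _ncdf(d2)

def call_delta(s, k, t, r, sigma):
    d1 = (math.log(s / k) + (r + 0.5 * sigma**2) * t) / (sigma * math.sqrt(t))
    return _ncdf(d1)   # sensitivity of the call price to the spot

s, k, t, r, sigma, h = 100.0, 100.0, 1.0, 0.05, 0.2, 0.5
delta = call_delta(s, k, t, r, sigma)
# P&L of the bare option vs. the delta-hedged position for a move of h:
unhedged = call_price(s + h, k, t, r, sigma) - call_price(s, k, t, r, sigma)
hedged = unhedged - delta * h
print(unhedged, hedged)   # the hedged P&L is orders of magnitude smaller
```

The residual `hedged` term is second order in `h` (it is driven by gamma), which is why dynamic hedging requires re-adjusting delta as the spot moves.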

In addition to delta hedging, the lecture covers other risk management techniques such as gamma hedging and vega hedging. Gamma measures the rate of change of delta, while vega measures the sensitivity of an option's price to changes in implied volatility. These techniques allow investors to manage and adjust their positions based on changing market conditions and risks.

Towards the end of the lecture, the speaker highlights the limitations and assumptions of the BSM model. They acknowledge that real-world markets may deviate from the model's assumptions, such as the presence of transaction costs, liquidity constraints, and the impact of market frictions. The speaker encourages a cautious approach and emphasizes the importance of understanding the limitations and uncertainties associated with option pricing models.

Overall, the lecture provides a comprehensive overview of trading confidence, hedging strategies, option pricing models, and risk management techniques. It equips learners with essential knowledge and tools to navigate the complex world of financial markets and make informed decisions in trading and investment activities.

  • 00:00:00 In this section, the instructor explains the subjects of trading confidence, hedging, and the necessity of models that will be learned in the course. They go into detail on how to price put options and the concept of hedging. The instructor also covers stochastic processes and how to model asset prices. They introduce Ito's lemma and how it can be used for solving stochastic differential equations. Finally, the instructor gives an example of a training strategy where an investor would like to protect their investment from a potential decrease in the value of a stock. To do this, they can buy insurance to ensure that they have at least a certain amount of money in a worst-case scenario.

  • 00:05:00 In this section, the lecturer discusses the use of put options to protect against downward movements of a stock's price. However, buying a put option can be expensive, especially when the stock's volatility is high, as it is in the case of Tesla. To make the option cheaper, one can decrease the strike price, although this means accepting a lower price for the stock. The lecturer then shows a screenshot from Reuters, which demonstrates the different types of options available in the market, categorized by maturity and strike price, and explains the relationship between strike price and option prices for call and put options.

  • 00:10:00 In this section, the concept of implied volatility is introduced, describing it as a measure of uncertainty in the market. The lower the strike, the higher the implied volatility, and delta is also introduced as a measure of how much an option's value depends on the underlying asset. The video then explains how hedging works and how a ratio exists that results in no movement in the value of a portfolio, providing instantaneous risk-free results, but may also limit potential gains if the stock does not increase in value. Hedging with options is then discussed, and it is explained that it is suitable for those who do not plan to keep their stocks for a long time, though it can be expensive when volatility is high.

  • 00:15:00 In this section, the lecturer discusses options trading as a form of hedging and risk reduction. They explain that options are generally only desirable for short-term investments with a definite maturity, and that using them for long-term investments can be costly. The lecturer also talks about the concept of hedging with calls, and how selling options can be a way to reduce risk for investors who hold a large portfolio of stocks. However, they caution that selling too many calls can reduce the potential upside of owning stocks, and that options trading always carries some degree of risk.

  • 00:20:00 In this section, the video explores commodities, which are raw materials such as precious metals, oil, and food products that are often used as hedges against inflation because their prices are unpredictable but often show seasonal effects. Trading in commodities is mostly done in the futures market, where deals to buy or sell the commodity at some future time are made. The difference between electricity markets and other commodities is that electricity cannot be fully stored, which complicates the market, especially when the predictability and pricing of a derivative depend on electricity. Energy markets for commodities often deal specifically with the trade and supply of energy and are regulated by national and international authorities to protect consumer rights and avoid oligopolies.

  • 00:25:00 In this section, the lecturer discusses the asset class of currencies, otherwise known as the foreign exchange market. It is unique in that individuals cannot buy or sell a particular exchange rate. Instead, they exchange amounts of money from one currency to another. The dollar is considered the base currency, and it is a reserve currency. The foreign exchange market is amongst the most manipulated markets in the world due to Central Banks' access to reserves. They can influence or manipulate exchange rates to strengthen or weaken a currency. The lecturer also talks about a small application in FX markets, where a derivative can be used to hedge against currency risks when doing business abroad.

  • 00:30:00 In this section, the speaker discusses how banks and other financial institutions can buy or sell insurance against fluctuating exchange rates in order to deal with investment uncertainties. When investing abroad, different countries may have different strengths in their currencies and monetary policies that could lead to uncertain returns. Computational finance is focused on managing and calculating the risks involved in these types of investments by modeling these uncertainties and taking numerous factors into account. The speaker also notes that bitcoins can be considered foreign exchange rates, and that bitcoin is an interesting hybrid product since it is regulated as a commodity, but its value is determined through its exchange against the US dollar. Additionally, there is volatility in the pricing of bitcoins, making it hard to predict their future value.

  • 00:35:00 In this section, the speaker discusses the use of put options to protect profits on Bitcoin investments. The value of a put option depends on how far away the strike is from the current value of Bitcoin, with a higher strike resulting in a higher price for the option. However, playing in this market requires a substantial amount of capital due to the significant amount of money needed to pay for insurance. The volatility of Bitcoin also adds to the uncertainty and cost of investing in options. The speaker also gives a brief history of options and explains that options with longer maturities tend to be more expensive due to the cost of the insurance they provide.

  • 00:40:00 In this section of the video, the speaker introduces and explains different types of options, including European, American, Bermuda, and exotic/path dependent options. European options can only be exercised on the expiration/maturity date, while American options can be exercised on any trading day, making them more expensive. Bermuda options have specific exercise dates while exotic/path dependent options are customized and not very liquid. The speaker then discusses various terms related to options, such as maturity, exercise price, portfolio, writer, and financial engineering. The main focus of the lecture series is on pricing options accurately and minimizing risks associated with them. The speaker also simplifies the discussion with a graph and emphasizes the importance of understanding the main factors that drive option pricing.

  • 00:45:00 In this section, the professor discusses the pricing and comparison of stock options using statistical models and regression analysis. The focus is on the perspective of a writer of an option who would like to hedge their position to sell an option and at the same time protect themselves against the risk of the stock going up or down. By hedging a portfolio, a writer can sell an option and receive a value, VC0, and a delta value, which must be matched through the purchase or sale of a certain amount of stocks to hedge against any potential exposure. The writer must consider two scenarios when deciding on delta, whether the stock goes up or down, to minimize risk and maximize profit.

  • 00:50:00 In this section of the lecture, the professor explains how to build a portfolio whose value is not affected by fluctuations in the market: the value of the portfolio should be the same regardless of whether the stock goes up or down. A simple exercise determines the delta, the difference between the option values in the up and down states divided by the difference between the up and down stock values. Once this is calculated, it can be substituted to determine the value of the option. The result shows that statistical analysis used to predict the stock has nothing to do with the value of an option, which depends only on the stock itself; the difference in the option payoffs turns out to matter more than the probability of the moves, which relates to higher volatility of the stock driving the option price higher.
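The up/down exercise described above can be written out directly; the stock values and strike below are illustrative, and the interest rate is taken as zero to keep the arithmetic transparent.

```python
# One-step binomial replication: choose delta so that the hedged
# portfolio (delta shares minus one option) is worth the same whether
# the stock goes up or down.
s0, su, sd, k = 100.0, 120.0, 80.0, 100.0   # illustrative numbers
vu = max(su - k, 0.0)                        # call payoff if stock goes up
vd = max(sd - k, 0.0)                        # call payoff if stock goes down
delta = (vu - vd) / (su - sd)                # hedge ratio

# The hedged portfolio is worth delta*su - vu in both states; with a
# zero interest rate its value today equals that certain amount, so:
v0 = delta * s0 - (delta * su - vu)          # option value by replication
print(delta, v0)
```

The real-world probabilities of the up and down moves never enter this calculation, which is exactly the point the professor makes.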

  • 00:55:00 In this section, the factors that determine the price of an option are discussed, including the current state of the stock, maturity, and volatility. Interest rates also play a role in determining the value of an option. Longer time to expiry and higher volatility increase the chance that an option will end up in the money, while put-call parity states that there is a fixed relationship between calls and puts. By switching between the two, it is possible to evaluate numerically which is more beneficial. No assumption about the stock needs to be made when using put-call parity, and if the relation does not hold, arbitrage exists.
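The model-free nature of put-call parity mentioned above follows from a payoff identity that can be checked directly (strike and price grid below are illustrative): at maturity a long call plus a short put always pays S_T - K, so today C - P must equal S_0 - K·exp(-rT).

```python
# Put-call parity is model-free: at maturity,
# call payoff - put payoff = S_T - K for every terminal stock price.
K = 100.0
for s_t in [x * 10.0 for x in range(0, 21)]:       # S_T from 0 to 200
    lhs = max(s_t - K, 0.0) - max(K - s_t, 0.0)    # call minus put payoff
    assert lhs == s_t - K                          # identity holds exactly
print("payoff identity holds; hence C - P = S0 - K*exp(-r*T) today")
```

If market quotes violate this relation, an arbitrage portfolio can be assembled from the cheap side against the expensive one.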

  • 01:00:00 In this section, the lecturer discusses the concept of arbitrage and presents a strategy that involves using information about calls and puts to identify whether an arbitrage exists in the market. The importance of modeling random behavior in the stock market is also emphasized and the two common models, geometric and arithmetic Brownian motion, are introduced. The lecturer highlights how the latter allows stocks to become negative, which is not desirable. Additionally, the concept of return on investment is discussed, and a small experiment is performed using market data from five years to measure percentage returns. The returns are shown to oscillate around zero with occasional jumps up or down.

  • 01:05:00 In this section, the video discusses the use of collected returns to estimate the density of returns over time, which has a mean of zero and a standard deviation of one percent. The empirical cumulative distribution function is compared to a normal distribution, showing that the empirical distribution has fatter tails and does not go to zero as fast as the normal one. The video then introduces the Wiener process, also known as Brownian motion, as the common way to model the noise driving the randomness in a stock. The Wiener process has many desirable properties: it starts at zero at time t=0, has stationary independent increments, is normally distributed with mean zero and variance t, and has continuous paths without jumps. The video also discusses the two main components of stock modeling, time and volatility, which drive the price; the variance of the process scales with time and with the square of the volatility.
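The distributional properties of the Wiener process listed above can be checked numerically with the standard library alone; the horizon t and sample size below are arbitrary choices for illustration.

```python
import random
import statistics

# W_t is N(0, t): increments are stationary, independent, and Gaussian.
# We sample W_t directly and verify its mean and variance numerically.
random.seed(0)                  # fixed seed for a reproducible check
t = 2.0
n = 200_000
samples = [random.gauss(0.0, t ** 0.5) for _ in range(n)]  # draws of W_t
mean = statistics.fmean(samples)
var = statistics.pvariance(samples)
print(mean, var)                # mean close to 0, variance close to t
```

A full path simulation just accumulates such increments over successive small intervals, each with variance equal to the step size.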

  • 01:10:00 In this section, the lecturer explains the definition of a stochastic process and its use in modeling stock prices and returns. A stochastic process is a random variable with two arguments: time and an element of the probability space. The lecturer gives the formal definition of a stochastic process as a collection of random variables indexed in these two dimensions. They also discuss the Geometric Brownian Motion process, which is used to simulate stock prices. The process consists of a drift term and a volatility term, and it can be discretized to model the stock price at each time step. The lecturer emphasizes the importance of taking the time component into account when modeling stock prices and returns.

  • 01:15:00 In this section of the video, the lecturer discusses stochastic differential equations and their integral form. They go on to describe the Samuelson model, a process of the form of geometric Brownian motion. This model fits real data quite well for equities and indices when calibrated to historical path realizations. However, it is not suitable for calibration to options, and discrepancies appear because real data exhibit a greater probability of large rises and falls than the model predicts. This is due to the Gaussian nature of the model, in which extreme events are virtually impossible and most of the probability mass lies within three-sigma intervals.

  • 01:20:00 In this section, the speaker discusses various models used for options with an emphasis on the role of volatility as a main driver in these models. The models used for options are determined by volatility, and in addressing issues such as lack of fit in the tails, possible alternative solutions include the inclusion of jumps or stochastic volatility. The speaker also introduces three processes, arithmetic Brownian motion, geometric Brownian motion, and the Ornstein-Uhlenbeck process, with a focus on their features and differences. While arithmetic Brownian motion is simple, it allows stock values to become negative, making geometric Brownian motion preferable because the values of the process always stay positive. Finally, the Ornstein-Uhlenbeck process is mean-reverting, with a long-term mean and a parameter representing the speed at which paths oscillate around that mean.

  • 01:25:00 In this section, the lecturer discusses the differences between various stochastic processes used in different asset classes, such as geometric Brownian motion being commonly used for stocks since stocks cannot be negative and typically experience exponential growth. The lecture also introduces Ito's Lemma, a tool in finance used to find the solution to a particular stochastic differential equation. The lemma gives the dynamics of a function of a process, given the dynamics of the process itself, and the lecturer explains how this enables many stochastic differential equations to be solved by hand. The main element to remember when working with Ito's Lemma is the Ito table.

  • 01:30:00 In this section, the speaker discusses the use of the Ito table to find the stochastic differential equation for a given process. Ito's lemma is a powerful tool to find the dynamics of a function applied to a process, and it can easily be used once the table is memorized. The speaker provides an example of a stock process following geometric Brownian motion and applies the logarithmic function to find its dynamics; through the application of the table, only one cross term survives in the equation, which is used to find the final solution.
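The logarithm example described above can be written out explicitly. Starting from geometric Brownian motion and applying Ito's lemma to f(S) = ln S, the Ito table rule (dW)² = dt produces the extra drift correction:

```latex
dS_t = \mu S_t\,dt + \sigma S_t\,dW_t, \qquad f(S) = \ln S .

\text{Ito's lemma: } df = f'(S)\,dS + \tfrac{1}{2} f''(S)\,(dS)^2,
\qquad (dW_t)^2 = dt \text{ from the Ito table}.

d\ln S_t = \left(\mu - \tfrac{\sigma^2}{2}\right)dt + \sigma\,dW_t
\;\Longrightarrow\;
S_t = S_0 \exp\!\left(\left(\mu - \tfrac{\sigma^2}{2}\right)t + \sigma W_t\right).
```

This closed-form solution is what makes ln S Gaussian, and hence S itself log-normal, in the sections that follow.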

  • 01:35:00 In this section, the speaker discusses the solution for a stock process in terms of Brownian motion and the logarithm of the stock process. The logarithm of the stock process has a Gaussian distribution, consisting of a deterministic part and an arithmetic Brownian motion part. The stock process itself therefore follows a log-normal distribution, with mean and variance determined by the parameters of the process. The speaker then explains how different parameters affect this log-normal distribution, such as higher volatility resulting in a wider distribution.

  • 01:40:00 In this section, the speaker discusses the impact of mu on the variance of the process and the resulting effect on its distribution: a higher mu leads to a fatter-tailed distribution and increases the spread of the process. The speaker then shows a simulated normal process and a log-normal process; the latter has an asymmetric density with a fatter upper tail. This reflects the behavior of stocks driven by geometric Brownian motion and the exponential form of their density.
Computational Finance: Lecture 2/14 (Stock, Options and Stochastics)
  • 2021.02.17
  • www.youtube.com
Computational Finance Lecture 2 - Stock, Options and Stochastics. This course is based on the book: "Mathematical Modeling...
 

Computational Finance: Lecture 3/14 (Option Pricing and Simulation in Python)


In the lecture, the instructor delves into stock path simulation in Python and explores the Black-Scholes model for pricing options. They discuss two approaches to deriving the arbitrage-free price for options, namely hedging and martingales. The speaker demonstrates how to program martingales and simulate them, highlighting the connection between partial differential equations (PDEs) and Monte Carlo simulation in the pricing framework.

Using the Euler discretization method, the speaker explains how to simulate and generate graphs of stochastic processes. They start with a simple process and employ Ito's lemma to switch from S to X, the logarithm of S. The lecturer then introduces the Euler discretization method and demonstrates its implementation in Python. This method involves discretizing the continuous function and simulating the increments for both drift and Brownian motion, resulting in graphs of simulated paths.

From a computational perspective, the speaker discusses the simulation of paths for option pricing models. Instead of simulating each path individually, they explain the efficiency of performing time slicing and constructing a matrix where each row represents a specific path. The number of rows corresponds to the number of paths, while the number of columns corresponds to the number of time steps. The speaker explains the implementation of the discretization process using the standard normal random variable and emphasizes the importance of standardization for better convergence.
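The path-matrix construction described above can be sketched with NumPy; the parameter values are illustrative, and the standardization of the random draws follows one common reading of the convergence remark.

```python
import numpy as np

def simulate_gbm_paths(s0, mu, sigma, t, n_paths, n_steps, seed=42):
    """Euler discretization of GBM on the log, vectorized over all paths:
    each row of the returned matrix is one path, each column a time point."""
    rng = np.random.default_rng(seed)          # fixed seed -> reproducible runs
    dt = t / n_steps
    z = rng.standard_normal((n_paths, n_steps))
    z = (z - z.mean()) / z.std()               # standardize samples for better convergence
    increments = (mu - 0.5 * sigma**2) * dt + sigma * np.sqrt(dt) * z
    log_paths = np.concatenate(
        [np.zeros((n_paths, 1)), np.cumsum(increments, axis=1)], axis=1
    )
    return s0 * np.exp(log_paths)              # exponentiate: paths stay positive

paths = simulate_gbm_paths(100.0, 0.05, 0.2, 1.0, n_paths=1000, n_steps=250)
print(paths.shape)   # (1000, 251): rows are paths, columns are time steps
```

Working on the log and exponentiating at the end keeps every simulated value strictly positive, matching the geometric Brownian motion dynamics.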

The lecture also covers the simulation of paths for geometric Brownian motion using Python. The speaker illustrates how to fix a random seed for stable simulations and introduces the Black-Scholes model, which involves a stochastic differential equation with drift and parameters such as mu and sigma for modeling asset prices. The speaker emphasizes that the Black-Scholes model is still widely used in the finance industry, particularly for pricing options on stocks. They discuss the concepts of real-world measure and risk-neutral measure, which aid in pricing options based on different outcome probabilities.

Furthermore, the lecture explores option pricing and simulation in Python. The speaker distinguishes between the real-world measure, estimated based on historical data without assuming arbitrage or risk-free conditions, and the risk-neutral measure, which requires certain conditions to hold. They present a trading strategy involving continuous trading in a stock and adjusting the option position to capture the underlying stock's movement. The speaker explains the dynamics of the portfolio using Ito's lemma and derives the stochastic nature of option values through this method.

The speaker also delves into techniques for constructing a hedging portfolio that is independent of Brownian motion. They discuss choosing a delta that nullifies the terms involving Brownian motion, ensuring a delta-neutral portfolio. The speaker highlights the importance of the portfolio yielding the same return as a savings account and introduces the concept of money savings accounts.

Additionally, the lecture addresses the derivation of partial differential equations (PDEs) for option valuation using the Black-Scholes model. The resulting PDE is a second-order derivative with boundary conditions that determine the fair value of an option. The speaker emphasizes that the Black-Scholes option price does not depend on the drift parameter mu, whether obtained from calibration or historical data. However, transaction costs for hedging are not considered in this model.

The lecture covers various important concepts within the Black-Scholes model and option pricing. It discusses the assumption of no arbitrage opportunities, leading to a risk-free scenario for the model's application. The speaker explains the concept of delta hedging and how it eliminates the largest random component of a portfolio. Additionally, the speaker introduces gamma as a measure of delta's behavior and emphasizes that every parameter in the model can be hedged. Finally, the lecture explores the determining factors of an option's value, such as time, strike, volatility, and market-related parameters.

In the lecture, the speaker further explores the Black-Scholes model and its application in option pricing. They discuss the assumptions and limitations of the model, including the assumption of constant volatility and the absence of transaction costs. Despite these limitations, the Black-Scholes model remains widely used in the financial industry due to its simplicity and effectiveness in pricing European call and put options.

The speaker introduces the concept of implied volatility, which is the market's expectation of future volatility derived from the current option prices. Implied volatility is a crucial parameter in the Black-Scholes model as it affects the pricing of options. The speaker explains how implied volatility can be obtained from market data using the model and discusses its significance in option trading strategies.

The lecture delves into various option trading strategies, such as delta hedging and gamma trading. Delta hedging involves continuously adjusting the portfolio's composition to maintain a neutral position in relation to changes in the underlying asset's price. Gamma trading focuses on exploiting changes in gamma, which measures how delta changes with respect to the underlying asset's price. These strategies aim to manage risk and maximize profitability in option trading.

The speaker also touches upon other important factors influencing option prices, including time decay (theta), interest rates (rho), and dividend yield. They explain how these factors impact option pricing and how traders can use them to make informed decisions.

Throughout the lecture, Python programming is utilized to demonstrate the implementation of various option pricing models and trading strategies. The speaker provides code examples and explains how to utilize libraries and functions to perform calculations and simulations.

In summary, the lecture provides a comprehensive overview of option pricing and simulation using the Black-Scholes model and related concepts. It emphasizes the practical application of these concepts in Python programming, making it a valuable resource for individuals interested in quantitative finance and options trading.
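As a capstone to the pricing framework summarized above, the two valuation routes can be compared in a few lines: a Monte Carlo estimate under the risk-neutral measure against the Black-Scholes closed form. The parameters are illustrative, not taken from the lecture.

```python
import math
import numpy as np

def bs_call(s, k, t, r, sigma):
    """Black-Scholes closed-form price of a European call."""
    d1 = (math.log(s / k) + (r + 0.5 * sigma**2) * t) / (sigma * math.sqrt(t))
    d2 = d1 - sigma * math.sqrt(t)
    n = lambda x: 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))
    return s * n(d1) - k * math.exp(-r * t) * n(d2)

# Monte Carlo under the risk-neutral measure: simulate S_T exactly
# (drift r, not mu), average the discounted payoff.
s0, k, t, r, sigma = 100.0, 100.0, 1.0, 0.05, 0.2
rng = np.random.default_rng(1)
z = rng.standard_normal(500_000)
s_t = s0 * np.exp((r - 0.5 * sigma**2) * t + sigma * math.sqrt(t) * z)
mc = math.exp(-r * t) * np.maximum(s_t - k, 0.0).mean()
print(mc, bs_call(s0, k, t, r, sigma))   # the two estimates agree closely
```

The agreement of the two numbers illustrates the equivalence of the martingale (Monte Carlo) and PDE (closed-form) pricing routes for this simple payoff.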

  • 00:00:00 In this section of the lecture, the instructor discusses stock path simulation in Python and the Black-Scholes model for pricing. He explains the two ways to derive the arbitrage-free price for options, through hedging and martingales, and demonstrates how to program martingales and simulate them. He also discusses the relationship between partial differential equations (PDE) and Monte Carlo simulation in a pricing framework and how to distinguish different measures in a stochastic differential equation. The lecture concludes with a proof for the Black-Scholes model and a demonstration of how to perform pricing using Python.

  • 00:05:00 In this section, the speaker discusses how to simulate and generate graphs of stochastic processes using the Euler discretization method. They begin with a simple process from the previous lecture and use the Ito's lemma to switch from S to X, the logarithm of S. They then explain the Euler discretization method and how to implement it using Python. The method involves discretizing the continuous function and simulating the increments for both drift and Brownian motion. The code shown in the video is used to generate the graphs of the simulated paths.

  • 00:10:00 In this section, the speaker discusses the computational perspective of simulating paths for an option pricing model. Instead of simulating each path individually, it is computationally efficient to perform time slicing and build a matrix where each row corresponds to a particular path. The number of rows is determined by the number of paths, and the number of columns is determined by the number of time steps. The speaker explains the implementation of the discretization of the process using the standard normal random variable, and how standardization helps achieve better convergence.

  • 00:15:00 In this section, the speaker explains how to simulate paths of a geometric Brownian motion using Python, including how to fix a random seed for stable simulations. They also introduce the Black-Scholes model, which includes a stochastic differential equation with a drift, and parameters like mu and sigma, for modeling the price of an asset like a stock. They note that this model is still commonly used in the finance industry, and explain how it can be used to price options on the stock. The speaker also discusses the concept of real-world measure and risk-neutral measure, which help to price options based on the probabilities of different outcomes.

  • 00:20:00 In this section, the lecture discusses option pricing and simulation in Python. The real-world measure is explained as the parameters estimated based on historical data, without assuming anything about arbitrage or being risk-free, while the risk-neutral measure requires no-arbitrage conditions to hold. A strategy is presented where one holds one option and trades continuously in a stock to hold some shares, buying or selling an option to catch the movement of the underlying stock. The portfolio is consistently rebalanced every day to match its value and hedge against any fluctuations of the underlying stock. Ito's Lemma is applied to find the dynamics of the portfolio, and the value of an option is derived as stochastic through this method.

  • 00:25:00 In this section of the lecture, the speaker discusses substituting the dynamics of the stock in order to apply Ito's lemma and handle the squared term. They then explain how to build a hedging portfolio that does not depend on Brownian motion, which is achieved by choosing a delta for which all terms involving the Brownian motion are equal to zero. The speaker also explains that this riskless portfolio must earn the same yield as putting the money in a savings account, and introduces the representation of cash via the money-savings account.

  • 00:30:00 In this section, the lecturer explains how to derive a partial differential equation (PDE) for valuing options using the Black-Scholes model. The resulting second-order PDE, together with its boundary conditions, can be used to determine the fair value of an option. Interestingly, the PDE does not depend on the parameter mu, meaning that drifts obtained from calibration or historical data do not impact option pricing in the risk-neutral framework. However, it is essential to note that transaction costs for hedging are not considered in this model.
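Written out, the pricing PDE described here is (with V the option value, S the stock, r the interest rate, sigma the volatility, and K the strike of a European call):

```latex
% Black-Scholes pricing PDE -- note that mu does not appear
\frac{\partial V}{\partial t}
  + r S \frac{\partial V}{\partial S}
  + \tfrac{1}{2}\sigma^{2} S^{2} \frac{\partial^{2} V}{\partial S^{2}}
  - r V = 0,
\qquad V(T, S) = \max(S - K, 0) \quad \text{(European call)}.
```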

  • 00:35:00 In this section, the speaker discusses several important concepts in the Black-Scholes model and option pricing. The first is the assumption that there are no arbitrage possibilities, meaning that the model is applied in a risk-free scenario. The speaker also explains the delta hedge and how it eliminates the largest random component of a portfolio. Additionally, the speaker introduces the importance of gamma, which measures how delta behaves, and notes that every parameter in the model can be hedged. Finally, the speaker discusses the determining factors of an option's value, including time, strike, volatility, and market-related parameters. One of the most significant findings in the Black-Scholes model is that the pricing equation does not depend on mu, so the drift is not an important component in option pricing.

  • 00:40:00 In this section, the speaker discusses option pricing and simulation in Python. They analyze a graph displaying different put and call options on the S&P 500, with a current value of 3800 and varying maturities, together with the Black-Scholes implied volatility and delta. They explain that the Black-Scholes model, despite its limitations and assumptions, is considered the market standard for option pricing. The speaker then introduces martingales, which offer an alternative way of determining the fair value of an option. They explain the concept of filtration and the three conditions for a stochastic process to be considered a martingale. They note that the third condition is the most important, and that martingale (expectation-based) methods are particularly useful for high-dimensional problems.
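The three martingale conditions referred to can be stated as follows, for a process X_t and a filtration F_t:

```latex
\begin{aligned}
&\text{(i)}  &&X_t \text{ is } \mathcal{F}_t\text{-measurable (adapted to the filtration)},\\
&\text{(ii)} &&\mathbb{E}\,\lvert X_t \rvert < \infty \quad \text{for all } t,\\
&\text{(iii)}&&\mathbb{E}\,[\,X_t \mid \mathcal{F}_s\,] = X_s \quad \text{for all } s \le t
  \quad \text{(the key condition).}
\end{aligned}
```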

  • 00:45:00 In this section of the video, the concept of a martingale and its relationship with fairness and no-arbitrage is discussed. The conditions to check whether Brownian motion is a martingale are explained and demonstrated using examples. The independence of the increments of Brownian motion and the linearity of expectations are also touched upon. An example involving the log-normal distribution is introduced, and the main condition that needs to be checked to determine whether it is a martingale is explained.

  • 00:50:00 In this section, the lecturer uses the filtration (tower) property to calculate the expectation of e^(W_t - W_s) and confirms that the process given in the previous line satisfies the martingale condition and is therefore a martingale. The main takeaway from this section is that a stochastic integral process is a martingale: whenever a process is defined as a stochastic integral with no drift, X_t is always a martingale with respect to the filtration. Such a driftless process can also be represented in differential form as dX_t = g(t) dW_t, with no dt term.

  • 00:55:00 In this section, the lecturer discusses whether or not a stock price is a martingale. Stocks are typically not martingales, because expecting back only the same amount of money as you invested would make them a bad investment. However, if you consider the discounted stock process and discount the future cash flows to today, you would expect the value of the company to equal the value you see today. The lecturer applies Ito's lemma to find the dynamics of S over M and check whether this term is a martingale; the theorem on stochastic integral processes determines the conditions under which this holds. The first partial derivative with respect to the stock is 1/M, and the second derivative is zero, so this term is a martingale.

  • 01:00:00 In this section, the speaker discusses how to switch between measures in order to make the discounted stock process a martingale under the Q measure, which is the measure of interest. The speaker shows how to switch the expectation from the real-world P measure to the Q measure and explains that once we have the process and the measure, we can derive the measure transformation. By enforcing the condition that the discounted stock process should be a martingale under the Q measure, the speaker cancels out the leading terms and derives the measure transformation to switch between measures.

  • 01:05:00 In this section of the lecture, the instructor discusses the starting point for pricing equations which involves an expectation under risk-neutral measure of a discounted future payoff to today. This forms the market price of a derivative, and the equation for this expression's dynamics involves the market price of risk, which tells the relation between the expected growth of a stock compared to the interest rate, scaled for volatility. The instructor demonstrates how to use Itô's lemma to find the dynamics for this expression, and after simplification, the resulting equation is the same as the expression for PDE in the Black-Scholes equation.

  • 01:10:00 In this section, the speaker explains that when computing an expectation under a risk-neutral measure, it is not allowed to use a process that is not under the risk-neutral measure. This means that the process used for the expectation should be discounted at the rate r; therefore, the drift of the process must always be changed from mu to r. The speaker uses Python code to demonstrate how to check whether a stock is a martingale, introducing the discounted stock value via the money-savings account. They also increase the number of paths in the simulation to improve accuracy, but caution against plotting all paths for performance reasons.
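A sketch of such a Python check (function name and parameter values are illustrative): simulate the stock at maturity under a chosen drift and compare the mean of the discounted stock with the initial value. The mean stays near S0 only when the drift equals the risk-free rate r, i.e. under the risk-neutral measure.

```python
import numpy as np

def discounted_stock_mean(drift, S0=100.0, r=0.05, sigma=0.2,
                          T=5.0, n_paths=200_000, n_steps=50, seed=7):
    """Monte Carlo mean of the discounted stock S_T / M_T with
    M_T = exp(r*T). Under drift = r (measure Q) the discounted stock
    is a martingale, so the mean stays at S0; under a real-world
    drift mu != r it drifts away from S0."""
    rng = np.random.default_rng(seed)
    dt = T / n_steps
    Z = rng.standard_normal((n_paths, n_steps))
    logS = np.log(S0) + np.cumsum(
        (drift - 0.5 * sigma**2) * dt + sigma * np.sqrt(dt) * Z, axis=1)
    S_T = np.exp(logS[:, -1])
    return S_T.mean() * np.exp(-r * T)   # discount by the savings account
```

Calling it with drift r gives a value near S0, while a larger real-world drift gives a visibly larger mean, mirroring the martingale check in the lecture.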

  • 01:15:00 In this section, the speaker discusses the connection between Monte Carlo simulation and partial differential equations (PDEs) for option pricing. The speaker presents a generic PDE and emphasizes that the PDE does not depend on μ but on the interest rate, r. To relate pricing with Monte Carlo simulation to solving this PDE, the speaker introduces the Feynman-Kac formula, which establishes the link between PDEs and stochastic processes and offers a method for solving certain PDEs by simulating random paths of a stochastic process. The terminal condition is also discussed, and the speaker notes that discounting is typically associated with pricing.
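In symbols, the Feynman-Kac link stated here reads as follows for the Black-Scholes operator with terminal payoff H:

```latex
% If V solves the pricing PDE with terminal condition V(T,S) = H(S) ...
\frac{\partial V}{\partial t}
  + r S \frac{\partial V}{\partial S}
  + \tfrac{1}{2}\sigma^{2} S^{2} \frac{\partial^{2} V}{\partial S^{2}}
  - r V = 0,
\qquad V(T, S) = H(S),
% ... then V is the discounted risk-neutral expectation of the payoff:
\quad\Longrightarrow\quad
V(t, S) = e^{-r (T - t)}\,
  \mathbb{E}^{\mathbb{Q}}\!\bigl[\, H(S_T) \mid S_t = S \,\bigr],
\qquad dS_u = r S_u\, du + \sigma S_u\, dW_u^{\mathbb{Q}}.
```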

  • 01:20:00 In this section, the speaker explains how to calculate the value of a derivative today by discounting the expected future payment and how the risk-free rate is used to discount future cash flows. The speaker also discusses the stochastic process and how to relate it to the partial differential equation (PDE) for the derivative's value. By applying Itô's lemma to the process, simplifying terms, and integrating both sides of the stochastic differential equation, the speaker shows that the expectation of the integral is zero, and this helps to prove the relation between the PDE and the derivative's value.

  • 01:25:00 In this section, the lecturer explains stochastic calculus and its use in option pricing. He shows how the expectation of a stochastic integral involving Brownian motion is always zero, which leads to the value of an option today being equal to the expectation of the payoff of a process at maturity. The lecturer demonstrates how to solve partial differential equations with terminal conditions using stochastic calculus and shows how the solution of an SDE can be obtained by calculating the second moment of the variable and applying it to the pricing equation. Finally, he explains that the discounted future value of the payoff is always related to the solution of the pricing equation, and that under the risk-neutral measure the drift of the process is always the risk-free rate.

  • 01:30:00 In this section, the lecturer explains two major approaches to option pricing: the PDE approach and the risk-neutral probability approach. The risk-neutral approach involves changing the probability measure from the true statistical probability to the risk-neutral probability, which is especially important when dealing with martingales. The lecturer also discusses the differences between the measures and when to choose which one, with the risk-neutral probability being the probability of a future event or state that both trading parties in the market agree upon. This helps estimate the probabilities associated with a particular event and price it accordingly.

  • 01:35:00 In this section, the speaker explains the concept of risk-neutral probability, which is the probability measured by the market that is used for pricing financial instruments. The risk-neutral probability is not a historical statistic or prediction, but rather it reflects the market's common belief towards the probability of an event happening. The speaker shows how to simulate Monte Carlo simulations using either the Q measure or the P measure. The Q measure is the risk-neutral measure, and it's determined once the price for a contract is established, which tells us the risk-neutral probability assigned to the particular event. The speaker emphasizes the importance of using this probability measure to avoid arbitrage and explains how to estimate the parameters needed for the simulations from market data and historical data.

  • 01:40:00 In this section of the lecture, the concept of drift is discussed in relation to option pricing and simulation in Python. The simulation involves calculating the ratio between the stock at any time and the money-savings account, which is a martingale under the risk-neutral measure. The code is plotted and shows that under the P measure, the ratio is not a martingale. The second part of the lecture involves the application of the famous Black-Scholes model to find the option price under the geometric Brownian motion and derive the Black-Scholes formula using a logarithmic transformation and integrating the function. The expectation is calculated under the risk-neutral measure, and the value of the derivative is obtained using the Feynman-Kac formula.

  • 01:45:00 In this section, the video explains the process of using the cumulant-generating function to compute the option pricing. It involves transforming the original option pricing integral into a cumulant-generating function version. The transformation provides a normal distribution that is easier to handle than a log-normal distribution. After the substitution, we end up with the Black-Scholes pricing theorem, a famous formula for pricing European call options.
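The resulting Black-Scholes call formula can be sketched in a few lines of Python using NumPy and SciPy (the function name is illustrative, not taken from the lecture's code):

```python
import numpy as np
from scipy.stats import norm

def bs_call(S0, K, T, r, sigma):
    """Black-Scholes price of a European call:
    V = S0 * N(d1) - K * exp(-r*T) * N(d2)."""
    d1 = (np.log(S0 / K) + (r + 0.5 * sigma**2) * T) / (sigma * np.sqrt(T))
    d2 = d1 - sigma * np.sqrt(T)
    return S0 * norm.cdf(d1) - K * np.exp(-r * T) * norm.cdf(d2)
```

For example, with S0 = K = 100, T = 1, r = 0.05 and sigma = 0.2 this gives roughly 10.45.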
Computational Finance: Lecture 3/14 (Option Pricing and Simulation in Python)
  • 2021.03.05
 

Computational Finance: Lecture 4/14 (Implied Volatility)

In this comprehensive lecture on computational finance, the concept of implied volatility takes center stage, shedding light on its significance in option pricing computations. While the Black-Scholes model serves as a foundation for calculating implied volatility, its limitations and inefficiencies are duly emphasized. The lecture delves into various methodologies for computing implied volatility, notably iterative processes such as the Newton-Raphson method. Additionally, the lecturer explores the challenges associated with modeling option prices and underscores the role of implied volatilities in reflecting market expectations. Throughout the lecture, the crucial importance of comprehending volatility in option pricing and constructing effective hedging portfolios remains a central theme.

The lecture extends its exploration by focusing on the relationship between option prices and implied volatility, with a specific emphasis on liquid out-of-the-money puts and calls. It examines different types of implied volatility skew, encompassing time-dependent volatility parameters and the influence of time dependency on the implied volatility smile. Furthermore, the lecture delves into the limitations of the Black-Scholes model and alternative approaches to handling volatility models, including local volatility models, jump models, and stochastic volatility models. The impact of option maturity on volatility is also elucidated, with shorter maturity options exhibiting a more concentrated distribution around the money level compared to longer maturities, where the smile effect becomes less pronounced.

The professor commences by summarizing the key concepts covered in previous sections, specifically relating to option pricing and volatility modeling. Implied volatility is introduced, highlighting its computation from market data and its role in measuring uncertainty. The algorithm for computing implied volatility is discussed in detail. Furthermore, the limitations and efficiencies of the Black-Scholes model are addressed, along with extensions such as incorporating time-dependent volatility parameters and generating implied volatility surfaces. The lecture also touches upon the downsides of relying solely on the Black-Scholes model and introduces alternative models like local volatility and stochastic volatility. Emphasis is placed on the need to specify an appropriate model for pricing contingent claims and the significance of constructing a hedging portfolio consisting of options and stocks to arrive at a pricing partial differential equation (PDE).

The speaker proceeds to explore the utilization of expectations in solving partial differential equations, specifically when dealing with a deterministic interest rate and the necessity of taking expectations under the risk-neutral measure. The pricing equation for European call and put options is presented: for a call, the initial stock value multiplied by the standard normal cumulative distribution function (CDF) evaluated at d1, which depends on the model parameters, minus the strike multiplied by an exponential discount factor involving the interest rate over the time to maturity and the CDF evaluated at d2. The lecture explains that this formula can be easily implemented in Excel.

Next, the lecturer elaborates on the parameters required for the Black-Scholes model, which serves as a tool for estimating option prices. These parameters encompass time to maturity, strike, interest rate, current stock value, and the volatility parameter, sigma, which needs to be estimated using market prices. The lecturer emphasizes the one-to-one correspondence between option price and volatility, highlighting that an increase in volatility implies a corresponding increase in option price, and vice versa. The concept of implied volatility is then discussed, emphasizing its calculation based on mid-price and its significance within the Black-Scholes model.

The lecture further delves into obtaining implied volatility from models with multiple parameters. It is noted that regardless of the chosen model, its prices must still be mapped through the Black-Scholes formula to quote implied volatilities. However, using the Black-Scholes model to price all options simultaneously becomes impractical because the implied volatility differs for each strike. The lecture also points out that implied volatilities tend to increase with longer option maturities, signifying greater uncertainty. An example is provided to demonstrate the computation of implied volatility using market data and a standard call option on 100 shares.

The concept of implied volatility is further expounded upon by the lecturer. Historical data on an option is used to estimate its volatility using the Black-Scholes equation. However, the lecturer highlights that while this estimation provides a certain price for the option, the market may have priced it differently due to its forward-looking nature, contrasting with the backward-looking historical estimation. Despite this discrepancy, the relationship between the two volatilities is still utilized for investment purposes, although the lecturer advises caution against purely speculative reliance on this relationship. The lecture then proceeds to explain how to calculate implied volatility using the Black-Scholes equation given the market price and other specifications of an option. However, the lecturer acknowledges that the concept of implied volatility is inherently flawed as there is no definitive correct value, and the model used is an approximation rather than a true representation of option pricing.

The lecturer proceeds to explain the process of finding implied volatility by employing the Newton-Raphson method, an iterative approach. This method involves setting up a function based on the Black-Scholes equation and the market price to solve for sigma, the implied volatility. The lecturer highlights the use of a Taylor series expansion to calculate the difference between the exact solution and the iteration, with the objective of finding a function where the Black-Scholes implied volatility matches the market implied volatility. The ability to compute implied volatility rapidly in milliseconds is crucial for market makers to identify arbitrage opportunities and generate profits.

The concept of the iterative process for computing implied volatility using the Newton-Raphson method is introduced. The process entails multiple iterations until the function g approaches zero, with each new step estimated based on the previous one. The lecturer emphasizes the significance of the initial guess for the convergence of the Newton-Raphson method. Extreme out-of-the-money options or options close to zero can present challenges as the function becomes flat, resulting in a small gradient that hinders convergence. To overcome this issue, practitioners typically define a grid of initial guesses. The algorithm approximates the function using its tangent line and calculates the x-intercept, with steeper gradients leading to faster convergence.

Furthermore, the lecturer explains the implementation of the Newton-Raphson algorithm for calculating the implied volatility of an option. The algorithm relies on the Black-Scholes model, with input parameters including the market price, strike, time to maturity, interest rate, initial stock value, and initial volatility parameter. The convergence of the algorithm is analyzed, and an error threshold is determined. The code is demonstrated using Python, with necessary methods and definitions prepared in advance, leveraging the NumPy and SciPy libraries.
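A compact sketch of this Newton-Raphson loop, using the Black-Scholes price and Vega (function names, tolerance, and the initial guess are illustrative; the lecture's own code may differ):

```python
import numpy as np
from scipy.stats import norm

def bs_call(S0, K, T, r, sigma):
    """Black-Scholes price of a European call."""
    d1 = (np.log(S0 / K) + (r + 0.5 * sigma**2) * T) / (sigma * np.sqrt(T))
    d2 = d1 - sigma * np.sqrt(T)
    return S0 * norm.cdf(d1) - K * np.exp(-r * T) * norm.cdf(d2)

def vega(S0, K, T, r, sigma):
    """Derivative of the call price with respect to sigma."""
    d1 = (np.log(S0 / K) + (r + 0.5 * sigma**2) * T) / (sigma * np.sqrt(T))
    return S0 * norm.pdf(d1) * np.sqrt(T)

def implied_vol(market_price, S0, K, T, r, sigma0=0.3, tol=1e-8, max_iter=100):
    """Newton-Raphson on g(sigma) = BS(sigma) - market price:
    sigma_{n+1} = sigma_n - g(sigma_n) / Vega(sigma_n)."""
    sigma = sigma0
    for _ in range(max_iter):
        g = bs_call(S0, K, T, r, sigma) - market_price
        if abs(g) < tol:          # error threshold on the iteration
            break
        sigma -= g / vega(S0, K, T, r, sigma)
    return sigma
```

Pricing an option at a known volatility and then inverting the price recovers that volatility, which is a convenient self-test of the routine.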

The lecture elaborates on the computation of implied volatility, emphasizing the inputs required for this calculation, such as the option value and the derivative of the call price with respect to the volatility parameter, known as Vega. The core of the code involves the step-by-step process of computing implied volatility, with the lecturer providing explanations on the various parameters involved and their significance. The lecture concludes with a brief demonstration of the iterative process employed to compute implied volatility.

The speaker also addresses the topic of error in calculating implied volatility and how it is determined by the differences between iterations. The output chart showcases the implied volatility obtained for a call price, strike, maturity, and other parameters. The speaker illustrates how convergence varies with different initial guesses for volatility, underscoring the importance of this process in industry calibration. The initial guess must be close to the actual implied volatility for the model to converge successfully. Industry practitioners typically attempt different initial volatilities until a suitable convergence is achieved, and that particular volatility value is chosen.

The lecture dives deeper into the interpretation of implied volatilities. Implied volatilities can provide insights into market expectations and sentiment. When the implied volatility is high, it suggests that market participants anticipate significant price fluctuations, which may indicate uncertainty or perceived risk in the underlying asset. Conversely, low implied volatilities indicate expectations of relatively stable prices.

The lecture emphasizes that implied volatilities are not a measure of future volatility but rather a reflection of market pricing. Implied volatilities are influenced by various factors such as supply and demand dynamics, market sentiment, and market participants' risk appetite. Therefore, it is crucial to interpret implied volatilities in the context of other market indicators and fundamental analysis.

The lecturer also highlights the concept of implied volatility surfaces or volatility smiles. Implied volatility surfaces represent the relationship between implied volatilities and different strike prices and maturities. In certain market conditions, the implied volatilities of out-of-the-money options may be higher or lower than those of at-the-money options. This curvature in the implied volatility surface is known as the volatility smile or smirk. The lecture explains that the volatility smile indicates market participants' perception of the probability of extreme price movements, such as large downside risks or unexpected positive events.

Moreover, the lecture covers the concept of implied volatility term structures. Implied volatility term structures depict the relationship between implied volatilities and different maturities for a specific option. The lecturer explains that implied volatility term structures can exhibit different shapes, such as upward sloping (contango), downward sloping (backwardation), or flat curves. These term structures can provide insights into market expectations regarding future volatility over different time horizons.

Additionally, the lecture delves into the limitations and challenges associated with implied volatilities. It emphasizes that implied volatilities are derived from option prices, which are influenced by various factors and assumptions, including interest rates, dividend yields, and the efficient market hypothesis. Therefore, implied volatilities may not always accurately reflect the true underlying volatility.

Furthermore, the lecture discusses the concept of historical volatility and its comparison to implied volatility. Historical volatility is calculated based on past price movements of the underlying asset, while implied volatility is derived from option prices. The lecturer notes that historical volatility is backward-looking and may not fully capture future market expectations, while implied volatility incorporates forward-looking information embedded in option prices.

Lastly, the lecture concludes with a summary of the key points covered. It emphasizes the importance of understanding implied volatility, its calculation methods, and its interpretation in the context of option pricing and market expectations. The lecturer encourages further exploration and research in this area, given its significance in financial markets and investment decision-making.

  • 00:00:00 In this section of the lecture, the professor begins by summarizing what has been learned so far about option pricing and modeling volatility. He explains the concept of implied volatility and how it is computed from the market, as well as its importance in measuring uncertainty. The algorithm for computing implied volatility is also discussed. Additionally, the limitations and efficiencies of the Black-Scholes model are covered, along with extensions of the model such as introducing a time-dependent volatility parameter and generating implied volatility surfaces. Finally, the downside limitations of the Black-Scholes model and alternative models like local volatility and stochastic volatility are mentioned. The lecture emphasizes the need to specify a model that can be used to price contingent claims, and the importance of constructing a hedging portfolio consisting of an option and stocks to arrive at a pricing PDE.

  • 00:05:00 In this section, the speaker discusses the use of expectations in solving partial differential equations, specifically in the case of a deterministic interest rate and the need to take the expectation under the risk-neutral measure. The process used in the expectation must be under the risk-neutral Q measure, with the payoff discounted at the risk-free rate. The pricing equation for European call and put options is shown to rely on the initial stock value multiplied by the standard normal CDF evaluated at d1, which is a function of the model parameters, minus the discounted strike multiplied by the CDF at d2. The formula can be easily implemented in Excel.

  • 00:10:00 In this section, the speaker explains the parameters required for the Black-Scholes model, which is used to estimate option prices. These parameters include time to maturity, strike, interest rate, current stock value, and the volatility parameter, sigma, which needs to be estimated using market prices. The speaker emphasizes that there is a one-to-one correspondence between option price and volatility, and that an increase in volatility implies an increase in option price, and vice versa. The lecture then discusses implied volatility, which is calculated based on mid-price and is an important element in the Black-Scholes model.

  • 00:15:00 In this section, the lecturer discusses how to get implied volatility from a model that has many parameters. He notes that regardless of the model chosen, its prices must always be mapped through the Black-Scholes model. However, the Black-Scholes model cannot be used to price all options at the same time, because the implied volatility for every strike is different. The lecturer also points out that the longer the maturity of an option, the higher the implied volatilities become, reflecting greater uncertainty. Finally, the lecture gives an example of how to compute the implied volatility from market data and a standard call option on 100 shares.

  • 00:20:00 In this section, the lecturer discusses the concept of implied volatility. He begins by using historical data on an option to estimate its volatility using the Black-Scholes equation. He then notes that while this gives a certain price for the option, the market may be pricing it differently, because the market is forward-looking whereas the historical estimation is backward-looking. He explains that people still use the relationship between the two volatilities for investment purposes, but warns that this is purely speculative. Finally, he explains how to use the Black-Scholes equation to calculate the implied volatility of an option given its market price and other specifications. However, he notes that the concept of implied volatility is inherently flawed, as there is no way to know the correct number and the model used is not the true model for option pricing.

  • 00:25:00 In this section, the lecturer explains the process of finding implied volatility by calculating the inverse of the option pricing model using Newton-Raphson's approach. This involves setting up a function for the Black-Scholes equation and the market price to find sigma, which is the implied volatility. To do so, they use a Taylor series expansion to calculate the difference between the exact solution and the iteration, with the goal being to find a function where Black-Scholes implied volatility equals market implied volatility. Market makers rely on fast computation of implied volatility in milliseconds to identify arbitrage opportunities and make a profit.

  • 00:30:00 In this section, the iterative process for computing implied volatility using the Newton-Raphson method is introduced. The process involves repeating the iteration until the function g is close enough to zero, with each new step estimated from the previous one. However, the initial guess is a crucial factor in the convergence of the Newton-Raphson method. If the option value is extremely out of the money or too close to zero, the function becomes very flat, and the gradient becomes too small to converge. Practitioners usually define a grid of initial guesses to overcome this problem. The algorithm approximates the function by its tangent line and computes the x-intercept of that line; the steeper the gradient, the faster the convergence.

  • 00:35:00 In this section of the lecture, the speaker explains the implementation of the Newton-Raphson algorithm for calculating the implied volatility of an option. The function to be inverted is the Black-Scholes model, with the input parameters being the market price, strike, time to maturity, interest rate, initial stock value, and initial volatility parameter. The algorithm relies on two evaluations: the target function and its first derivative, which is known as Vega. The convergence of the algorithm is analyzed, and an error threshold is derived. The code is implemented in Python, with the necessary methods and definitions prepared beforehand, relying on the NumPy and SciPy libraries.

  • 00:40:00 In this section, the lecturer explains the process of computing implied volatility. The inputs required for this computation include the option value and the derivative of the call price with respect to the volatility parameter. The Vega parameter, which is the sensitivity of the option value to the volatility parameter, is also discussed. The core of the code involves the computation of implied volatility and the lecturer walks through the process step-by-step. They also explain the various parameters involved in the computation and their significance. The lecture concludes with a brief demonstration of the iterative process used to compute implied volatility.

  • 00:45:00 In this section, the speaker discusses the error in calculating implied volatility and how it is determined by the difference between iterations. The output chart shows the implied volatility that was found for a call price, the strike, maturity and other parameters. The speaker also shows how the convergence changes with different initial guesses for volatility and how this process is important in industry calibration. The initial guess must be close to the real implied volatility, or the model will not converge. Industry practitioners try different initial volatilities until the model converges, and that volatility is chosen.

  • 00:50:00 In this section, the lecturer discusses the use of implied volatilities in the calculation of option prices. They note that the problem lies in the initial volatility being close to zero, which makes the gradient search ineffective. The lecture also examines how implied volatilities can indicate what kinds of shapes the market expects and how to check whether option prices are correct. The lecturer concludes with a remark about using a strike equal to zero when checking option prices.

  • 00:55:00 In this section, we learn about the challenges of modeling option prices and how the flexibility of the Black-Scholes model is limited when fitting two implied volatilities with only one parameter, especially when implied volatilities are no longer constant. However, the Black-Scholes model is still used when it is good enough to fit a single option with one particular strike, as it can be calibrated to the price given in the market. We also learn that when plotting implied volatilities against a set of strikes, there are typically three different shapes that can be observed, the most common being the implied volatility smile, whose lowest point typically lies in the region around the at-the-money level.

  • 01:00:00 In this section of the lecture, the relationship between option prices and implied volatility is discussed, with a focus on the most liquid out-of-the-money puts and calls. The lecture explains how implied volatilities increase as options move further out-of-the-money, so the gap between the market price and the Black-Scholes model price also increases. The lecture also covers different types of implied volatility skew, including one where the implied volatility increases slightly as you move further away from the at-the-money option. The lecture ends with a discussion on how to improve the Black-Scholes equation by using time-dependent volatility parameters.

  • 01:05:00 In this section, the video discusses the impact of time dependency on implied volatility. Time-dependent volatility cannot generate an implied volatility smile across different strikes, but it can produce an implied volatility term structure, where the volatility level varies for options of different maturities. The video also shows how to compute implied volatility and generate paths with time-dependent volatility, how this affects the Black-Scholes implied volatility equation, and an example of fitting different volatility levels for two options with different maturities.
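The term-structure idea above can be illustrated with a short sketch: under time-dependent but strike-independent volatility, the Black-Scholes implied volatility for maturity T is the root-mean-square of σ(t) over [0, T], so different maturities carry different volatility levels while each strike slice stays flat. The piecewise-constant levels below are illustrative assumptions, not values from the lecture.

```python
# Sketch: effective (implied) Black-Scholes volatility for maturity T under
# piecewise-constant time-dependent volatility sigma(t):
#   sigma_eff(T) = sqrt( (1/T) * integral_0^T sigma(t)^2 dt )
import math

def effective_vol(times, sigmas, T):
    """RMS volatility over [0, T]; sigmas[i] applies on [times[i], times[i+1])."""
    total_var, t_prev = 0.0, 0.0
    for t_next, sig in zip(times[1:], sigmas):
        dt = min(t_next, T) - t_prev
        if dt <= 0:
            break
        total_var += sig**2 * dt          # accumulate integrated variance
        t_prev = min(t_next, T)
    return math.sqrt(total_var / T)

# Illustrative levels: sigma = 0.2 in year one, 0.3 in year two
times, sigmas = [0.0, 1.0, 2.0], [0.2, 0.3]
print(round(effective_vol(times, sigmas, 1.0), 4))  # 0.2 (only first level active)
print(round(effective_vol(times, sigmas, 2.0), 4))  # sqrt(0.065) ≈ 0.255
```

This is exactly why two options with different maturities can each be fitted with their own volatility level, while the smile across strikes remains flat.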

  • 01:10:00 In this section, the speaker explains how implied volatility changes based on different strikes and maturities using graphs. They introduce the concept of implied volatility surface, which is an important element in discussing volatilities and stochastic volatility models. They then discuss the relationship between the maturity of an option and its volatility, explaining that short maturity options have a more concentrated distribution around the money level, while longer maturities diffuse over time and the smile effect becomes less pronounced. Lastly, they point out that for longer maturities, the distribution of the option becomes much broader, signifying more uncertainty.

  • 01:15:00 In this section, the video discusses the different shapes of implied volatility, which vary based on the maturity of the contract and other factors. The Black-Scholes model is limited because it can only calibrate to one point in the grid, so any volatility outside of the money level will be flat. While the Black-Scholes model is not ideal for more complicated payoffs or contracts, it is still important as it gives insight into the pricing of derivatives, construction of replicating portfolios, hedging, and simulating market movements. Despite its limitations, the Black-Scholes model is a fundamental model in finance.

  • 01:20:00 In this section, the speaker talks about the limitations of the Black-Scholes model in reality. He highlights that although hedging requires continuously rebalancing a portfolio to give the same rate of return as investing in a money savings account, this is impractical as buying and selling stocks hundreds of times a day would be very expensive due to transaction costs. As a result, hedging happens on a much less frequent basis, depending on market behavior, and the transaction costs and less frequent hedges are not taken into account in the Black-Scholes model. Furthermore, empirical studies of financial time series have revealed that the normality assumption of asset prices cannot capture heavy tails. This means that the probability assigned to extreme events is very low, and this is not well captured by the log-normal distribution of the Black-Scholes model.

  • 01:25:00 In this section of the lecture, the instructor explains three approaches to handling volatility models. The first is local volatility models, a simple extension of the Black-Scholes model, in which the so-called local volatility function is constructed from market data. The second approach, discussed in the next lecture, introduces jumps, enabling the generation of smile and skew effects. The third approach involves stochastic volatility, an advanced extension of local volatility, in which a stochastic differential equation drives the volatility.
Computational Finance: Lecture 4/14 (Implied Volatility)
  • 2021.03.12
  • www.youtube.com
Computational Finance Lecture 4- Implied Volatility▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬This course is based on the book:"Mathematical Modeling and Computa...
 

Computational Finance: Lecture 5/14 (Jump Processes)


The lecture progresses to explore ways to enhance the Black-Scholes model by incorporating jumps in the stock process, transitioning from a diffusive model to a jump-diffusion model. The instructor begins by explaining the inclusion of jumps in the stock process and providing a definition of jumps. They then demonstrate a simple implementation of a jump process in Python, emphasizing the need to handle jumps in a stochastic process for stocks while ensuring the model remains valid under the Q measure.

Furthermore, the lecture delves into the implications of introducing jumps in pricing and how it affects the pricing PDE (Partial Differential Equation), introducing additional integral terms. The discussion extends to the impact of different jump distributions on implied volatility shapes and the utilization of concepts such as expectation iterated expectations, the tower property of expectation, and characteristic functions for jump processes when dealing with complex expectations.

The lecturer emphasizes the practicality of jump processes in pricing options and calibrating models, highlighting their realism and ability to accommodate heavy tails, as well as to control the kurtosis and asymmetry of the log-return density. By incorporating a jump process, a better fit to the implied volatility smile or skew can be achieved, making jump processes a more favorable alternative to the Black-Scholes model.

Shifting focus, the lecture introduces the concept of jump processes represented by a counting process, which are uncorrelated to Brownian motion. These processes are modeled using a random Poisson process, characterized by initial zero value and independent increments following a Poisson distribution. The rate of the Poisson process determines the average number of jumps in a specified time period. The lecture explains how to calculate the average number of jumps within a given interval for jump processes using notation and expectations.
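A minimal simulation of this counting process, assuming only what the paragraph states: independent Poisson-distributed increments with rate λ, starting from zero. The sample average of the terminal count should land near λT. Parameter values are illustrative.

```python
# Sketch: simulate a Poisson counting process X_P(t) from independent
# increments dX_P ~ Poisson(lam * dt); E[X_P(T)] = lam * T.
import numpy as np

np.random.seed(1)
lam, T, n_steps, n_paths = 2.0, 5.0, 500, 20_000
dt = T / n_steps

increments = np.random.poisson(lam * dt, size=(n_paths, n_steps))
Xp = np.cumsum(increments, axis=1)     # X_P(0) = 0 by construction

mean_jumps = Xp[:, -1].mean()          # sample average of X_P(T)
print(round(mean_jumps, 1))            # close to lam * T = 10.0
```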

In computational finance, the lecturer discusses the simulation of jump processes, noting that the jump magnitude cannot explode and outlining the associated technical assumptions. The process involves defining matrices and parameters for simulating independent increments using a Poisson distribution for each increment of the jump process. The lecture also covers the use of the Poisson process in Itô's lemma to extend the dynamics of jump processes for stock pricing. The term "t-minus" is defined as the time just before a jump occurs in a process, and the dynamics are explored through Itô's lemma and the calculation of derivatives with respect to time. The relationship between the jump size and the resulting adjustment in the function "g" is discussed, emphasizing the practical relevance of these concepts in modeling stochastic processes. The lecture also highlights the importance of considering the independence of jump processes and diffusive processes when modeling stock market behavior.

To derive the dynamics of a function "g" in a model incorporating both jump and diffusion processes, the lecture focuses on the added complexity of combining the two and on the application of Itô's lemma, which is used to handle cross terms such as (dX_P(t))². Once all the elements, including drift, diffusion, and jumps, are combined, the dynamics of "g" can be derived using Itô's lemma. The extension of the Itô table is also touched upon, emphasizing the differences between a Poisson process and Brownian motion. The lecture concludes by outlining the process of deriving the dynamics for a function "g" that incorporates both jump and diffusion processes.

Moving forward, the lecture describes the process of obtaining the dynamics of a stock with jump and Brownian motion under the Q measure. This process involves defining a new variable and determining its dynamics, ensuring that the expectation of the dynamics is zero. The jump component is assumed to be independent of all other processes, resulting in an expression that includes terms for drift, volatility, and the expectation of e^J minus one. This expression is then substituted into the equation for the Q measure, ensuring that the dynamics of the stock over the money savings account is a martingale.

The instructor proceeds to discuss how to derive a model with both diffusion and jumps, providing an example to illustrate the paths of a model with two components: diffusive and jump. The diffusive part represents continuous behavior, while the jump element introduces discontinuity, allowing for the representation of jump patterns observed in certain stocks. The instructor also covers the parameters for the jump and the volatility parameter for Brownian motion, along with the initial values for the stock and interest rates. To further enhance understanding, the instructor demonstrates how to program the simulation and plot the resulting paths.
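A compact path simulator along these lines, assuming Merton-style normal jump sizes in the log and illustrative parameter values. The drift correction −λ(E[e^J]−1) keeps the discounted stock a martingale under Q, which the final line checks numerically.

```python
# Sketch: jump-diffusion paths in log-space,
#   d(log S) = (r - lam*(E[e^J]-1) - sigma^2/2) dt + sigma dW + J dX_P,
# with J ~ N(muJ, sigJ^2) per jump. Parameters are illustrative.
import numpy as np

np.random.seed(7)
S0, r, sigma = 100.0, 0.05, 0.2          # initial stock, rate, diffusive vol
lam, muJ, sigJ = 1.0, -0.1, 0.25         # jump intensity, jump-size parameters
T, n_steps, n_paths = 1.0, 250, 50_000
dt = T / n_steps

EeJ = np.exp(muJ + 0.5 * sigJ**2)        # E[e^J] for J ~ N(muJ, sigJ^2)
drift = (r - lam * (EeJ - 1.0) - 0.5 * sigma**2) * dt  # martingale-corrected

logS = np.full(n_paths, np.log(S0))
for _ in range(n_steps):
    dW = np.sqrt(dt) * np.random.standard_normal(n_paths)
    dN = np.random.poisson(lam * dt, n_paths)          # jump counts this step
    # Sum of dN iid N(muJ, sigJ^2) jumps, drawn in one shot per path:
    J = muJ * dN + sigJ * np.sqrt(dN) * np.random.standard_normal(n_paths)
    logS += drift + sigma * dW + J       # diffusive part + jump part

# Martingale check: E[S_T] should be close to S0 * exp(r * T) ≈ 105.13
print(round(np.exp(logS).mean(), 2))
```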

The lecture then moves on to explain the expectation of e to the power of J, which is calculated analytically as the mean of a log-normal distribution. The simulation of Poisson increments driven by the intensity ξp times dt is performed, with Z representing the increments of a normal distribution and J representing the jump magnitude. The dynamics of the jump-diffusion process involve both partial differential equations and integral terms, where the integral part represents the expectation over jump sizes. The pricing equation can be derived through portfolio construction or through the characteristic function approach, and the parameters need to be calibrated using option prices in the market.
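The analytical expectation mentioned above is the mean of a log-normal variable: for J ~ N(μJ, σJ²), E[e^J] = exp(μJ + σJ²/2). A quick Monte Carlo cross-check, with illustrative parameter values:

```python
# Sanity check: E[e^J] = exp(muJ + sigJ^2 / 2) for J ~ N(muJ, sigJ^2),
# i.e. the mean of a log-normal distribution.
import numpy as np

np.random.seed(0)
muJ, sigJ = 0.1, 0.3
analytic = np.exp(muJ + 0.5 * sigJ**2)      # closed form
mc = np.exp(muJ + sigJ * np.random.standard_normal(2_000_000)).mean()
print(round(analytic, 4), round(mc, 4))     # the two agree closely
```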

In the context of portfolio construction, the lecture describes the process of constructing a portfolio comprising a sold option and a hedge with an underlying stock. By ensuring that the portfolio's dynamics increase at the same rate as the money savings account, a pricing differential equation can be derived. To achieve the desired dynamics, the stock divided by the money savings account must be a martingale. The lecture then derives the condition for mu, demonstrating that once the dynamics are established, the dynamics of v can be derived. This information is then used to compute expectations and derive the dynamics of v.

The lecturer further explores the pricing equation, which involves a first-order derivative with respect to time, is first-order with respect to x, and includes an expectation of the value of the contract at time t after a jump. This expectation produces an integral term, resulting in a partial integro-differential equation (PIDE) that is more challenging to solve than a pure PDE. The solution involves finding the analytical expression for the expected value, which may sometimes be expressed in terms of infinite series. The importance of boundary conditions and of the log-transformation of the PIDE for improved convergence are also discussed.

Continuing the discussion on jump processes, the lecture turns to the PIDE and its log-transformed version. Two common approaches for specifying the jump magnitude are presented: the classical Merton model and the non-symmetric double-exponential model. While the calibration of the model becomes more complicated with the addition of σJ and μJ, practicality and industry acceptance often favor models with fewer parameters. The lecture also acknowledges that as the dynamics of jump processes become more complex, achieving convergence becomes challenging, necessitating advanced techniques such as Fourier-space methods or analytical solutions for parameter calibration.

The lecture then proceeds to explain the process of pricing using Monte Carlo simulation for jump-diffusion processes. Pricing involves computing the expectation of the future payoff, discounted to its present value. While methods like PIDEs and Monte Carlo simulation perform well in terms of computational complexity for simulations, they may not be ideal for pricing and model calibration, because introducing jumps significantly increases the number of parameters. The lecture also delves into interpreting the distribution of jumps and the intensity parameter and their impact on the implied volatility smile and skew. A simulation experiment is conducted, varying some parameters while keeping others fixed to observe the resulting effects on smile and skew.

To analyze the effects of volatility and intensity of jumps on the shape of the implied volatility smile and level, the lecturer discusses their relationships. Increasing the volatility of a jump leads to a higher level of volatility, while the intensity of jumps also affects the level and shape of the implied volatility smile. This information is crucial for understanding the behavior of option prices and calibrating models to real-market data.

The lecture then introduces the concept of the Tower Property and its application in simplifying problems in finance. By conditioning on a path from one process to compute the expectation or price of another process, problems with multiple dimensions in stochastic differential equations can be simplified. The Tower Property can also be applied to problems in Black-Scholes equations with volatility parameters and accounting processes, which often become summations when dealing with jump integrals. The lecturer emphasizes the need for making assumptions regarding parameters in these applications.

Next, the lecturer discusses the use of Fourier techniques for solving pricing equations in computational finance. Fourier techniques rely on the characteristic function, which can be found in analytical form for some special cases. The lecturer walks through an example using Merton's model and explains how to find the characteristic function for this equation. By separating expectation terms involving independent parts, the lecturer demonstrates how to express the summation in terms of expectations, allowing for the determination of the characteristic function. The advantage of using Fourier techniques is their ability to enable fast pricing computations, which are crucial for model calibration and real-time evaluation.
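A sketch of the characteristic function for Merton's model, assembled from the independent diffusive and jump factors as described, and cross-checked against a Monte Carlo estimate of E[exp(iuX_T)] for X_T = log S_T. The parameter values are illustrative assumptions.

```python
# Sketch: Merton characteristic function for X_T = log S_T,
#   phi(u) = exp( iu(log S0 + mu_bar T) - sigma^2 u^2 T / 2
#                 + lam T (exp(iu muJ - sigJ^2 u^2 / 2) - 1) ),
# with mu_bar = r - lam(E[e^J]-1) - sigma^2/2. Parameters illustrative.
import numpy as np

S0, r, sigma = 100.0, 0.05, 0.2
lam, muJ, sigJ = 1.0, -0.1, 0.25
T = 1.0

EeJ = np.exp(muJ + 0.5 * sigJ**2)
mu_bar = r - lam * (EeJ - 1.0) - 0.5 * sigma**2    # risk-neutral log-drift

def cf_merton(u):
    # diffusive factor times jump factor exp(lam*T*(phi_J(u) - 1))
    diff_part = 1j * u * (np.log(S0) + mu_bar * T) - 0.5 * sigma**2 * u**2 * T
    jump_part = lam * T * (np.exp(1j * u * muJ - 0.5 * sigJ**2 * u**2) - 1.0)
    return np.exp(diff_part + jump_part)

# Monte Carlo cross-check of E[exp(i u X_T)] at u = 1
np.random.seed(3)
n = 1_000_000
N = np.random.poisson(lam * T, n)
X = (np.log(S0) + mu_bar * T + sigma * np.sqrt(T) * np.random.standard_normal(n)
     + muJ * N + sigJ * np.sqrt(N) * np.random.standard_normal(n))
print(cf_merton(1.0))
print(np.exp(1j * X).mean())   # should be close to cf_merton(1.0)
```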


Throughout the lecture, the instructor emphasizes the importance of understanding and incorporating jump processes in computational finance models. By including jumps, models can better capture the behavior of real-world stock prices and provide more accurate pricing and calibration results. The lecture also highlights the challenges associated with jump processes, such as the complexity of solving integral differential equations and the need for careful parameter calibration. However, with the appropriate techniques and methodologies, jump processes can significantly enhance the accuracy and realism of computational finance models.

  • 00:00:00 In this section, the lecturer explains how to improve the Black-Scholes model by including jumps in the stock process and moving from a diffusive model to a jump-diffusion model. The discussion starts with how to include jumps in the stock process and the definition of jumps. The lecturer also shows a simple implementation of a jump process in Python and how to deal with jumps in a stochastic process for stocks to ensure that the model is still valid under the Q measure. The inclusion of jumps into pricing introduces additional integral terms, which will be present in the pricing PDE. The lecture also discusses the impact of different jump distributions on implied volatility shapes and how to use iterated expectations, the tower property of expectation, and characteristic functions for jump processes when dealing with complicated expectations. Finally, the lecture covers how to use the Fourier transform to invert the characteristic function for calibration of jump-diffusion models that have multiple parameters.

  • 00:05:00 In this section, the lecturer discusses extending the model to jumps. The behavior of a stock, such as KLM, cannot be explained by a geometric Brownian motion because they reveal jump patterns. These jumps are observed in the market and could be due to unexpected market events or dividend payments, but often they are related to factors like political conflict or delivery problems of commodities. To better fit the behavior of a stock and multiple strikes for option pricing, a process is needed that includes this jump phenomenon. One such process is a Lévy-based model with jump diffusion, which includes a Brownian motion and a jump part that can explain the jump patterns exhibited by some stocks.

  • 00:10:00 In this section, the lecturer discusses the usefulness of jump processes in pricing options and calibrating models. He explains how jumps are realistic when pricing options and how they allow for better calibration while including heavy tails. Additionally, jump processes can help control the kurtosis and asymmetry of the log-return density. By building a process that includes a jump, he demonstrates how it can facilitate a better fit to the implied volatility smile or skew. Overall, jump processes are a superior alternative to the Black-Scholes model.

  • 00:15:00 In this section, the second stochastic process in computational finance is introduced: a jump process represented by a counting process. The jump process is uncorrelated to Brownian motion and is modeled with a random Poisson process. The Poisson process starts at zero and has independent increments with probabilities given by the Poisson distribution. The rate of the Poisson process represents the average number of jumps in a specified time period. The probability of a jump happening during a small time interval dt is then calculated from the Poisson process, up to a little-o(dt) term, and the probability of zero jumps occurring is also discussed.

  • 00:20:00 In this section, the lecturer explains how to calculate the average number of jumps in a given interval for jump processes. The calculation uses the difference between X_P(t+dt) and X_P(t), written in the short notation dX_P. The expectation of an event is calculated as its value times the probability of the event. Additionally, the compensated Poisson process is defined, whose expected value is zero. Finally, the lecture mentions that there is typically no correlation between the jump magnitude and the counting process, which separates the size of a jump from the timing of when it happens.
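The compensated process mentioned in this bullet can be checked numerically: X_P(t) − λt has expectation zero, which is what makes it usable as a martingale building block. Parameter values are illustrative.

```python
# Sketch: the compensated Poisson process X_P(t) - lam*t has zero mean.
import numpy as np

np.random.seed(11)
lam, t, n_paths = 3.0, 2.0, 200_000
Xp_t = np.random.poisson(lam * t, n_paths)   # X_P(t) ~ Poisson(lam * t)
compensated = Xp_t - lam * t                 # subtract the compensator
print(round(compensated.mean(), 2))          # close to 0
```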

  • 00:25:00 In this section, the lecturer discusses jump processes in computational finance. The jump magnitude cannot explode, and there are technical assumptions regarding it. Simulating the paths and realizations of the process involves defining matrices and parameters for a Poisson distribution, which is used to simulate independent increments for each increment of the jump process. The lecture also covers how to use the Poisson process in Itô's lemma to extend its dynamics for stock pricing.

  • 00:30:00 In this section, the concept of a jump process is introduced and explained within the context of computational finance. The term "t-minus" is defined as the time just before a jump takes place in a process, and the dynamics of the process are explored through Itô's lemma and the calculation of the derivatives with respect to time. The relationship between the size of the jump and the resulting adjustment in the function g is discussed, and the practical relevance of these concepts in modeling stochastic processes is highlighted. Additionally, the importance of considering the independence of jump processes and diffusive processes when modeling stock market behavior is emphasized.

  • 00:35:00 In this section of the lecture, the focus is on deriving the dynamics of a function g in a model that has both jump and diffusion processes. The speaker begins by explaining that as the complexity of the model increases, the derivation of solutions can become significantly more difficult. The speaker then introduces Itô's lemma and discusses how it is applied in this context, particularly when dealing with cross terms such as (dX_P(t))². The speaker then explains that once all the elements (drift, diffusion, and jumps) are put together, the dynamics of g can be derived using Itô's lemma. The extension of the Itô table is also touched upon, with the speaker explaining where the difference between a Poisson process and Brownian motion becomes apparent. Finally, the speaker outlines the process of deriving the dynamics for a function g that incorporates both jump and diffusion processes.

  • 00:40:00 In this section, the speaker describes the process of arriving at the dynamics of a stock with jump and Brownian motion under the Q measure. The process involves defining a new variable and determining its dynamics, and ensuring that the expectation of the dynamics is zero. The jump component is assumed to be independent of all other processes, and the resulting expression includes terms for drift and volatility along with the expectation of e^J minus one. The final step involves substituting this process into the equation for the Q measure and ensuring that the dynamics of the stock over the money savings account is a martingale.

  • 00:45:00 In this section of the lecture, the instructor explains how to derive a model with diffusion and jumps and gives an example of how the paths of a model with two components of diffusive components and jumps would look like. The process has a diffusive part, which behaves continuously, and a jump element, which makes it discontinuous. The instructor also discusses the parameters for the jump and the volatility parameter for the Brownian motion, as well as the initial values for the stock and interest rates. Finally, the instructor shows how to program the simulation and plot the paths.

  • 00:50:00 In this section of the computational finance lecture, the speaker explains the expectation of e to the power of J, which is calculated analytically as the mean of a log-normal distribution. They then simulate Poisson increments driven by the intensity ξp times dt, with Z as increments of a normal distribution and J as the jump magnitude. The dynamics of the jump-diffusion process involve both partial differential equations and integral terms, with the integral part representing the expectation over jump sizes. The pricing equation can be derived through portfolio construction or through the characteristic function approach, with the parameters needing to be calibrated using option prices in the market.

  • 00:55:00 In this section, the lecture describes the process of constructing a portfolio consisting of a sold option and a hedge with the underlying stock. By ensuring that the portfolio's dynamics grow at the same rate as the money savings account, a pricing differential equation can be derived. The lecture explains that to achieve risk-neutral dynamics, the stock divided by the money savings account must be a martingale. The lecture then derives the condition for mu, showing that once the dynamics are established, the dynamics of v can be derived. This information is then used to compute expectations and derive the dynamics of v.

  • 01:00:00 In this section, the speaker discusses the pricing equation, which involves a first-order derivative with respect to time, is first-order with respect to x, and includes an expectation of the value of the contract at time t after a jump. The presence of this expectation leads to an integral term, making the equation a partial integro-differential equation (PIDE). The speaker explains that because of this, PIDEs can be more difficult to solve than PDEs. The solution involves finding the analytical expression for the expected value, which may sometimes be expressed in terms of infinite series. The speaker also discusses the importance of boundary conditions and the log-transformation of PIDEs for better convergence.

  • 01:05:00 In this section, the speaker discusses the PIDE and its log-transformed version for jump processes. The speaker notes that the specification of the jump magnitude J is up to the user but outlines two common approaches: the classical Merton model and the non-symmetric double exponential. While the calibration of the model becomes more complicated with the addition of σJ and μJ, typically, having fewer parameters is more practical and acceptable in the industry. The speaker notes that if the dynamics of jump processes are too complicated, then achieving convergence becomes problematic, and advanced techniques are required, like Fourier-space methods or even analytical solutions, to calibrate those parameters.

  • 01:10:00 In this section, the speaker discusses how to perform pricing using Monte Carlo simulation for a jump-diffusion process, which involves computing the expectation of the future payoff discounted to today. While methods like PIDEs and Monte Carlo perform well in terms of computational complexity for simulations, they may not be ideal for pricing and model calibration, as the introduction of jumps increases the number of parameters significantly. The speaker also explains how to interpret the distribution of jumps and the intensity parameter, and their impact on the implied volatility smile and skew. Additionally, the speaker conducts a simulation experiment, varying some parameters while keeping others fixed to observe the changes in the smile and skew effects.

  • 01:15:00 In this section, the lecturer discusses the effects of volatility and intensity of jumps on the shape of the implied volatility smile and level. Increasing the volatility of a jump leads to a higher level of volatility, while the intensity of jumps also affects the level and shape of the implied volatility smile. The lecture then moves on to discuss the tower property for expectations and how it can be used to handle jumps and integrals. The tower property for expectations allows for simplification and easier handling of expectation expressions, making it a useful tool in computing expectations involving jumps.

  • 01:20:00 In this section, the lecturer discusses the Tower Property and applies it to simplify problems in finance. By conditioning on a path from one process to compute the expectation or price of another process, problems with multiple dimensions in stochastic differential equations can be simplified. The Tower Property can also be applied to problems in Black-Scholes equations with volatility parameters and accounting processes, which often become summations when dealing with jump integrals. The lecturer emphasizes that assumptions must be made regarding parameters in these applications.

  • 01:25:00 In this section, the lecturer discusses the use of Fourier techniques for solving pricing equations in computational finance. Fourier techniques rely on the characteristic function which can be found in analytical form for some special cases. The lecturer walks through an example using Merton's model and explains how to find the characteristic function for this equation. By separating expectation terms involving independent parts, the lecturer shows how to express the summation in terms of expectations and thus find the characteristic function. The advantage of using Fourier techniques is that they allow for extremely fast pricing computations, which is crucial for model calibration and real-time evaluation.

  • 01:30:00 In this section, the lecturer discusses a formula that links the jump process to a Fourier transform. Using conditional expectation, the lecturer simplifies the formula into a characteristic function that involves the expectation of exponentials. The new expression closely resembles the series definition of the exponential function. Further simplification results in a compact expression for the characteristic function, which will be utilized in evaluating Fourier techniques.
Computational Finance: Lecture 5/14 (Jump Processes)
  • 2021.03.19
  • www.youtube.com
Computational Finance Lecture 5- Jump Processes▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬▬This course is based on the book:"Mathematical Modeling and Computation...
 

Computational Finance: Lecture 6/14 (Affine Jump Diffusion Processes)


The lecturer provides insights into the selection of pricing models within financial institutions, focusing on the distinction between the front office and the back office. The front office handles trading activities and initiates trades, which are then transferred to the back office for trade maintenance and bookkeeping. The lecturer emphasizes the need to consider various factors, including calibration, risk assessment, pricing accuracy, and computational efficiency when choosing a pricing model. Additionally, the concepts of characteristic functions and affine jump diffusion processes are introduced as model classes that allow for efficient pricing evaluation. These models are capable of fast pricing calculations, making them suitable for real-time trading. The lecture also delves into topics such as the derivation of the characteristic function, extending the framework by incorporating jumps, and the workflow of pricing and modeling in financial institutions.

The importance of understanding jump processes and their impact on pricing accuracy is highlighted throughout the lecture, along with the challenges involved in solving integral differential equations and calibrating model parameters. By leveraging appropriate techniques and methodologies, computational finance models can be enhanced to better reflect real-world stock price behavior and improve pricing and calibration results.

Furthermore, the speaker emphasizes the role of the front office in financial institutions, particularly in designing and pricing financial products for clients. The front office is responsible for selecting the appropriate pricing models for these products and ensuring that the trades are booked correctly. Collaboration with the back office is crucial to validate and implement the chosen models, ensuring their suitability for the institution's risks and trades. The primary objective of the front office is to strike a balance between providing competitive prices to clients and managing risks within acceptable limits while ensuring a steady flow of profits.

The speaker outlines the essential steps involved in successful pricing, starting with the specification of the financial product and the formulation of stochastic differential equations to capture the underlying risk factors. These risk factors play a critical role in determining the pricing model and the subsequent calculation of prices. Proper specification and modeling of these risk factors are crucial for accurate pricing and risk management.

During the lecture, different methods of pricing are discussed, including exact and semi-exact solutions, as well as numerical techniques such as Monte Carlo simulation. The speaker highlights the importance of model calibration, where the pricing model's parameters are adjusted to match market observations. Fourier techniques are introduced as a faster alternative for model calibration, allowing for efficient computation of model parameters.

The lecture also compares two popular approaches for pricing in computational finance: Monte Carlo simulation and partial differential equations (PDEs). Monte Carlo simulation is widely used for high-dimensional pricing problems, but it can be limited in accuracy and prone to sampling errors. PDEs, on the other hand, offer advantages such as the ability to calculate sensitivities like delta, gamma, and vega at a low cost and smoothness in the solutions. The speaker mentions that Fourier-based methods will be covered in future lectures as they offer faster and more suitable pricing approaches for simple financial products.
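
The contrast between the two approaches can be made concrete with a minimal sketch (parameter values are illustrative, not from the lecture): pricing a European call under Black-Scholes both by the closed-form formula and by Monte Carlo simulation of the terminal stock price.

```python
import math
import numpy as np

def bs_call(S0, K, T, r, sigma):
    """Black-Scholes closed-form price of a European call."""
    d1 = (math.log(S0 / K) + (r + 0.5 * sigma**2) * T) / (sigma * math.sqrt(T))
    d2 = d1 - sigma * math.sqrt(T)
    N = lambda x: 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))
    return S0 * N(d1) - K * math.exp(-r * T) * N(d2)

def mc_call(S0, K, T, r, sigma, n_paths=200_000, seed=42):
    """Monte Carlo price: sample the terminal stock under GBM, discount the payoff."""
    rng = np.random.default_rng(seed)
    Z = rng.standard_normal(n_paths)
    ST = S0 * np.exp((r - 0.5 * sigma**2) * T + sigma * math.sqrt(T) * Z)
    return math.exp(-r * T) * np.mean(np.maximum(ST - K, 0.0))

exact = bs_call(100.0, 100.0, 1.0, 0.05, 0.2)
approx = mc_call(100.0, 100.0, 1.0, 0.05, 0.2)
```

The gap between `approx` and `exact` shrinks only like 1/sqrt(n_paths), which is the sampling-error limitation the lecture refers to.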

The concept of characteristic functions is introduced as a key tool for bridging the gap between models with known analytical probability density functions and those without. By using characteristic functions, it becomes possible to derive the probability density function of a stock, which is essential for pricing and risk assessment.
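
The bridge from characteristic function to density can be illustrated by direct numerical inversion of the Fourier integral f(x) = (1/2pi) * integral of exp(-i*u*x) * phi(u) du. The sketch below (truncation range and grid size are our illustrative choices) recovers the standard normal density from its characteristic function exp(-u^2/2):

```python
import numpy as np

def density_from_cf(cf, x, u_max=40.0, n=2001):
    """Recover a density at point x by numerically inverting its
    characteristic function via the trapezoidal rule."""
    u, du = np.linspace(-u_max, u_max, n, retstep=True)
    integrand = np.exp(-1j * u * x) * cf(u)
    # trapezoidal rule over the truncated frequency range
    integral = du * (np.sum(integrand) - 0.5 * (integrand[0] + integrand[-1]))
    return float(np.real(integral)) / (2.0 * np.pi)

# Characteristic function of a standard normal variable
cf_normal = lambda u: np.exp(-0.5 * u**2)
f0 = density_from_cf(cf_normal, 0.0)   # true value: 1/sqrt(2*pi)
```

The same inversion works for any model whose characteristic function is known, which is exactly the point of the affine machinery discussed in the lecture.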

Throughout the lecture, the importance of calibration is emphasized. Liquid instruments are used as references for calibration, and their parameters are then applied to price more complex derivative products accurately. The lecturer highlights the need to continuously improve and refine pricing models and techniques to adapt to evolving market conditions and achieve reliable pricing results.

In summary, the lecture provides insights into the process of choosing pricing models in financial institutions, focusing on the front office's role, model calibration, and considerations of risk, efficiency, and accuracy. It also introduces various techniques such as Monte Carlo simulation, PDEs, and Fourier-based methods for pricing and model calibration. The concept of characteristic functions and their significance in deriving probability density functions is discussed, along with the challenges and importance of model refinement and adaptation to real-world conditions.

  • 00:00:00 In this section, the lecturer discusses how to choose a pricing model in the context of financial institutions. He explains that the front office is typically associated with trading activities, while the back office focuses on maintaining trades and bookkeeping. When a client wants to buy a derivative, the trade takes place in the front office, and then it is transferred to the back office. The lecturer also highlights the importance of considering different aspects, such as calibration, risks, pricing, and efficiency, when choosing a pricing model. Additionally, he introduces the concept of characteristic functions and affine jump diffusion processes, which are classes of models that allow for the efficient evaluation of prices in a fast way. The lecture also covers how to derive the characteristic function for the Black-Scholes model and how to extend the framework by adding jumps.

  • 00:05:00 In this section, the speaker discusses the workflow of a financial institution's front office, which primarily deals with designing and pricing financial products for clients. The front office decides on the model to be used for pricing the product and books the trade. It also coordinates with the back-office for the validation and implementation of the models used, ensuring that they are suitable for the risks and trades of the institution. The front office aims to balance the preference of offering better prices to clients while maintaining risks within acceptable limits and profits flowing continuously. The speaker outlines the necessary steps, including the specification of the financial product and the stochastic differential equations for the risk factors involved, for successful pricing.

  • 00:10:00 In this section of the lecture, the speaker discusses the process of pricing and modeling financial products. The process involves specifying risk factors, choosing models suitable for the dimensions, defining the model price, calibrating the model, and performing pricing. The last step involves selling the product and hedging. The speaker also explains the different methods of pricing, highlighting exact and semi-exact solutions as well as numerical methods like Monte Carlo simulation. The focus of the lecture is on the fourth point of model calibration, where the speaker talks about using Fourier techniques for faster calibration.

  • 00:15:00 In this section, the speaker discusses different approaches for pricing in computational finance, namely Monte Carlo simulation and PDEs. Monte Carlo simulation is a popular approach, especially for high-dimensional pricing problems, since PDEs can be challenging to solve and discretize in multiple dimensions; in practice PDE methods are limited to a few dimensions, while Monte Carlo is associated with random noise and potential sampling errors. PDEs, on the other hand, have the advantage of being able to calculate sensitivities like delta, gamma, and vega at a low cost, and their solutions are smooth. The speaker explains that in future lectures, they will focus on Fourier-based methods, which are faster and more suitable for simple products. He also explains how calibration is done based on liquid instruments and how these parameters are then used for pricing more complicated derivative products.

  • 00:20:00 In this section, the instructor discusses the use of Monte Carlo sampling for pricing financial derivatives and the potential issues with sampling error and randomness effects. They also mention the use of alternative methods such as Fourier techniques for calibration and finding the probability density function of a stock. The concept of a characteristic function is introduced to help bridge the gap between models for which the probability density function is known analytically and those for which it is not. The goal is to ultimately find a way to get from the characteristic function to the probability density function of the stock.

  • 00:25:00 In this section, the lecturer discusses the use of Fourier transformations for density recovery, which is particularly useful in the pricing of European-type options. The Fourier transformation method is computationally efficient and not restricted to Gaussian-based models, so it can be used for any random variable that has a characteristic function. The density recovery process involves relating the stochastic process's paths to the observed density at a given time t. The lecturer shows several graphs and discusses the importance of the frequency of signals and the relationship between a process's variance and the number of rotations.

  • 00:30:00 In this section, the speaker discusses the technical aspects of the Fourier transform and its importance in signal analysis. They explain how the Fourier transform maps a function into a frequency-domain representation and define a characteristic function as the expectation of e^(iuX). The density is derived from the CDF by taking its derivative, and the characteristic function can be used to find the k-th moment of a variable. Finally, they highlight the useful properties of the Fourier transform, including the relation between the k-th derivative of the characteristic function at zero and the k-th moment.

  • 00:35:00 In this section, the speaker explains the relation between a variable X defined as the logarithm of Y and the characteristic function of log Y evaluated at u. By taking the logarithm, X is transformed and the expectation simplifies into an integral from 0 to infinity, so the characteristic function of the logarithm of a variable can be used to calculate every moment of a stock. This approach works as long as the model being considered does not allow negative stock prices, which is rarely a restriction. The speaker also mentions that this is useful for calculating Black-Scholes moments analytically and introduces the characteristic function for the Black-Scholes model.

  • 00:40:00 In this section, the lecturer explains how to perform a log transformation on a stock variable in computational finance. By converting the variable, the resulting partial differential equation (PDE) becomes simpler to solve. The lecturer provides the updated PDE after the transformation and explains how to find the solution using Duffie-Pan-Singleton theorem. Additional details on exact conditions for the solution are promised to be discussed later.

  • 00:45:00 In this section, the speaker discusses how to solve the partial differential equation for the characteristic function using the Duffie-Pan-Singleton method. To find the solution, derivatives of the transformation from u to x must be calculated and substituted into the PDE. Then, using boundary conditions, the speaker finds solutions for the ordinary differential equations for a and b, which are then substituted into the expression for the characteristic function to arrive at the final result. This method is used to find the characteristic function for the Black-Scholes model, which is a trivial case with a known analytical solution.
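
The Black-Scholes case in the bullet above can be reconstructed explicitly (a standard derivation; tau denotes time to maturity, x = log S, and while the result should match the lecturer's, the notation here is ours). The ansatz e^(A+Bx) for the discounted characteristic function reduces the pricing PDE to two trivial ODEs:

```latex
\phi(u;x,\tau) \;=\; \mathbb{E}\!\left[e^{-r\tau}\,e^{iuX_T}\,\middle|\,X_t=x\right],
\qquad \phi \;=\; e^{A(\tau)+B(\tau)x},
\\[4pt]
B'(\tau)=0,\quad B(0)=iu \;\Rightarrow\; B(\tau)=iu,
\\[4pt]
A'(\tau)=\left(r-\tfrac{1}{2}\sigma^2\right)iu-\tfrac{1}{2}\sigma^2u^2-r,\quad A(0)=0,
\\[4pt]
\Rightarrow\;
\phi(u;x,\tau)=\exp\!\left(iux+\left[\left(r-\tfrac{1}{2}\sigma^2\right)iu-\tfrac{1}{2}\sigma^2u^2-r\right]\tau\right).
```

Matching powers of x is what forces B' = 0 here; in richer affine models the same matching produces genuine Riccati equations.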

  • 00:50:00 In this section, the speaker explains the process of deriving characteristic functions and finding the values of a and b in affine jump diffusion processes. The derivation requires checking whether the proposed solution can be applied to the given PDE, followed by determining the number of ODEs to solve to find a and b. In the Black-Scholes model, the characteristic function depends on the initial logarithm of the stock value. The class of models that can be considered as affine diffusion processes is such that the characteristic function has the form e^(a+bx). The speaker also discusses the conditions required for a system of stochastic differential equations to satisfy this characteristic function form, including the need for the volatility structure to be represented as a matrix whose dimensions depend on the number of state variables and Brownian motions.

  • 00:55:00 In this section, the lecturer explains the conditions for affine jump diffusion processes. The number of Brownian motions typically corresponds to the number of state variables in the model, but there are no strict requirements. The three conditions for these processes are: the drift, which can only depend linearly on X; a condition on interest rates; and a condition regarding the volatility structure. The most crucial and difficult condition is the volatility structure: the matrix obtained after multiplying or squaring the volatility must be linear in X. This condition is not satisfied by the Black-Scholes model, but it can be transformed under log transformation to satisfy the condition.
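
The three affinity conditions summarized above are commonly written as follows (a standard textbook formulation; the symbols a0, a1, c0, c1, r0, r1 are ours, not necessarily the lecturer's):

```latex
\mu(x) = a_0 + a_1 x, \qquad
\left(\sigma(x)\sigma(x)^\top\right)_{ij} = (c_0)_{ij} + (c_1)_{ij}^\top x, \qquad
r(x) = r_0 + r_1^\top x.
```

For Black-Scholes in the stock variable S, the instantaneous variance sigma^2 S^2 is quadratic rather than linear, which is exactly why the condition fails until the log transformation x = log S makes the variance constant.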

  • 01:00:00 In this section of the lecture, the professor discusses the concept of the characteristic function in the context of a system of differential equations and applies it to the Black-Scholes model. The characteristic function is defined as a discounted characteristic function with a boundary condition and a filtration. It can be solved using a solution for the corresponding system of Riccati-type ODEs. The professor provides an example of how to use this approach to solve for the characteristic function in the case of the Black-Scholes model.

  • 01:05:00 In this section, the focus is on the characteristic function for affine jump diffusion processes. By looking at the equation for the discounted characteristic function, it is noted that this term can be taken outside as it is constant. This section also looks at the conditions for affine diffusion and solving the ordinary differential equations for A and B. It is important to choose parameters that can be solved analytically to avoid time-consuming computations. The section also discusses working with more than one dimension and gives an example of modelling two stocks with uncorrelated geometric Brownian motion processes.

  • 01:10:00 In this section, the lecturer discusses the calculation of the characteristic functions for a 2-dimensional affine jump diffusion setting. The lecturer explains that the system of stochastic differential equations includes an additional term, j, and a multi-dimensional Poisson process, which means that jumps are now included in the framework of affine jump diffusion. The lecturer also explains that the terminal condition for the characteristic function includes a boundary condition where a is a constant term without any dependence on x, and b1 and b2 correspond to x1 and x2, respectively. Finally, the equation for the 2d characteristic function is given, where we have a, iu1, and iu2, which are explicitly known.

  • 01:15:00 In this section, the discussion focuses on the independence between the diffusion and jump parts in the affine jump diffusion model, where the jump magnitude is independent and the intensity of the framework does not depend on j. The conditions for this framework are a linear drift, a linear squared volatility (covariance matrix), a linear interest rate, and the same for the intensity, which means that psi, the intensity of the Poisson process, cannot depend in any other way than linearly on the state values. Finally, the section ends with a discussion of the difficulties of using jumps in models due to the increased volatility and fluctuations, which makes calibration and hedging more complicated.

  • 01:20:00 In this section, the speaker discusses the input and output dimensions of the characteristic function for affine jump diffusion processes. The output is typically one-dimensional, representing the marginal distribution of the log-stock, and depends on the characteristics of u, including variance and jumps. The dimension of the input is related to the number of stochastic differential equations. The speaker then demonstrates the process for an affine jump diffusion model by deriving the stochastic differential equation and partial integral differential equation. They find that the model is not affine because of a squared term, but after performing a log transformation, they are left with a basic differential equation with only one independent random variable, j. They then calculate the derivatives to get the solution for the characteristic function, which is a product of the characteristic function of j and a function of x.

  • 01:25:00 In this section, the lecturer discusses the derivation of the ordinary differential equations for affine jump diffusion processes. This is done by collecting the terms multiplying x, setting them to zero, and grouping the remaining terms with the derivative of a. The solution for a is then derived and is the same as the one found without using affine diffusion assumptions, apart from some constant parameters, such as a0 and l0 for the intensity psi, indicating that the intensity for jumps is constant and not state-dependent.
Computational Finance: Lecture 6/14 (Affine Jump Diffusion Processes)
  • 2021.03.27

Computational Finance: Lecture 7/14 (Stochastic Volatility Models)
In the lecture, we delve into the concept of stochastic volatility models as an alternative to Black-Scholes models, which may have their limitations. The speaker emphasizes that stochastic volatility models belong to the class of affine diffusion models, which require advanced techniques to efficiently obtain prices and implied volatilities. The motivation behind incorporating stochastic volatility is explained, and the two-dimensional stochastic volatility model of Heston is introduced.

One important aspect covered is the calibration of models to the entire implied volatility surface rather than just a single point. This is particularly crucial when dealing with path-dependent payoffs and strike direction dependency. Practitioners typically calibrate models to liquid instruments such as calls and puts and then extrapolate to the prices of exotic derivatives. Stochastic volatility models are popular in the market as they allow calibration to the entire volatility surface, despite their inherent limitations.

The lecture also highlights the significance of volatility surfaces in the stock market and the need for appropriate models. If the volatility surface exhibits a steep smile, models incorporating jumps or stochastic volatility are often preferred. Different measures used for pricing options, including the P measure and risk-neutral measure, are discussed. It is noted that while making interest rates time-dependent does not improve smiles or skew, introducing stochastic or local volatility can aid in calibration. The Heston model, which utilizes mean-reverting square root processes to model volatility, is introduced as well.

The lecture explores the concept of stochastic volatility models in detail. Initially, a normal process and Brownian motion are used to define a stochastic differential equation, but it is acknowledged that this approach fails to accurately capture volatility, especially as it can become negative. The benefits of the Cox-Ingersoll-Ross (CIR) process are explained, as it exhibits fat tails and remains non-negative, making it a suitable model for volatility. The Heston model, with its stochastic volatility structure, is introduced, and the variance (VT) is shown to follow a non-central chi-square distribution. It is clarified that this distribution is a transition distribution, and the Feller condition is mentioned as a critical technical condition to be checked during model calibration.
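
Because the CIR transition law is a scaled non-central chi-square distribution, the variance process can be sampled exactly rather than discretized. A minimal sketch (parameter values and function names are illustrative), including the Feller check mentioned above:

```python
import numpy as np

def feller_satisfied(kappa, vbar, gamma):
    """Feller condition: 2*kappa*vbar >= gamma^2 keeps the CIR
    variance process away from zero."""
    return 2.0 * kappa * vbar >= gamma**2

def cir_step_exact(v_t, dt, kappa, vbar, gamma, rng):
    """Exact CIR transition: v_{t+dt} is a scaled non-central
    chi-square random variable."""
    c = gamma**2 * (1.0 - np.exp(-kappa * dt)) / (4.0 * kappa)
    df = 4.0 * kappa * vbar / gamma**2          # degrees of freedom
    nonc = np.exp(-kappa * dt) * v_t / c        # non-centrality parameter
    return c * rng.noncentral_chisquare(df, nonc)

rng = np.random.default_rng(1)
kappa, vbar, gamma, v0, dt = 1.5, 0.06, 0.3, 0.04, 1.0
v1 = cir_step_exact(np.full(100_000, v0), dt, kappa, vbar, gamma, rng)
# Theoretical mean of v_{t+dt}: vbar + (v0 - vbar) * exp(-kappa * dt)
```

The sample mean of `v1` matches the known conditional mean of the CIR process, which is a convenient sanity check on any implementation.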

The conditions for stochastic volatility models to avoid paths hitting zero, referred to as the Feller's condition, are discussed. The condition is satisfied when two times the product of the kappa parameter and the long-term mean is greater than or equal to gamma squared, the volatility squared. When the condition is not met, paths can hit zero and bounce back, leading to an attainable boundary condition. The properties of non-central chi-squared distributions and their relation to CIR processes are explained. Variance paths and density graphs are provided to illustrate the effects of satisfying or not satisfying the Feller's condition.

The significance of fat-tailed distributions in stochastic volatility models is emphasized, as they are often observed after calibrating models to market data. It is noted that if a model's Feller's condition is not satisfied, Monte Carlo paths may hit zero and remain at zero. The inclusion of correlation in models via Brownian motion is explained, and it is mentioned that jumps are typically considered to be independent. The lecture concludes with a graph depicting the impact of the Feller's condition on density.

The lecture focuses on correlation and variance in Brownian motion. The speaker explains that when dealing with correlated Brownian motions, a certain relation must hold true, and the same applies to increments. The technique of Cholesky decomposition is introduced as a means to correlate two Brownian motions using a positive definite matrix and the multiplication of two lower triangular matrices. This method is helpful in formulating the two processes discussed later in the lecture.
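
A minimal numpy sketch of the Cholesky construction (rho = -0.7 is an illustrative, leverage-style correlation; variable names are ours):

```python
import numpy as np

rho = -0.7                          # target correlation
corr = np.array([[1.0, rho],
                 [rho, 1.0]])
L = np.linalg.cholesky(corr)        # lower triangular, corr = L @ L.T

rng = np.random.default_rng(0)
n, dt = 500_000, 0.01
Z = rng.standard_normal((2, n))     # independent standard normal increments
dW = L @ Z * np.sqrt(dt)            # correlated Brownian increments

emp_rho = np.corrcoef(dW)[0, 1]     # empirical correlation, close to rho
```

Multiplying the lower triangular factor into independent increments is exactly the "combination of independent and correlated processes" described in the next paragraph.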

The construction of lower triangular matrix multiplication with independent Brownian motions is discussed, resulting in a vector containing a combination of independent and correlated processes.

Furthermore, the lecturer explains that the characteristic function of the Heston model provides valuable insights into efficient and fast pricing. By deriving the characteristic function, it becomes apparent that all the terms involved are explicit, eliminating the need for complex analytical or numerical computations to solve the ordinary differential equations. This simplicity is considered one of the significant advantages of the Heston model, making it a practical and powerful tool for pricing derivatives.
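
For reference, a sketch of the Heston characteristic function of log S_T in the numerically stable "little trap" formulation found in the standard literature (parameter names kappa, vbar, gamma, rho follow the lecture's usage; the function name and parameter values are ours):

```python
import numpy as np

def heston_cf(u, tau, x0, r, v0, kappa, vbar, gamma, rho):
    """Characteristic function of X_T = log S_T under the Heston model
    ("little trap" form, stable for long maturities)."""
    iu = 1j * u
    d = np.sqrt((kappa - gamma * rho * iu)**2 + (u**2 + iu) * gamma**2)
    g = (kappa - gamma * rho * iu - d) / (kappa - gamma * rho * iu + d)
    e = np.exp(-d * tau)
    A = (r * iu * tau
         + kappa * vbar / gamma**2
           * ((kappa - gamma * rho * iu - d) * tau
              - 2.0 * np.log((1.0 - g * e) / (1.0 - g))))
    B = (kappa - gamma * rho * iu - d) / gamma**2 * (1.0 - e) / (1.0 - g * e)
    return np.exp(A + B * v0 + iu * x0)

phi0 = heston_cf(0.0, 1.0, np.log(100.0), 0.05, 0.04, 1.5, 0.06, 0.3, -0.7)
```

Every term is explicit, as the paragraph above notes: no ODE solver is needed, only elementary complex arithmetic. Useful sanity checks are phi(0) = 1 and |phi(u)| < 1 for real u away from zero.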

The speaker emphasizes that understanding the characteristics and implications of each parameter in the Heston model is crucial for effectively managing risks associated with volatility. Parameters such as kappa, the long-term mean, volatility, correlation, and the initial value of the variance process all have distinct impacts on volatility dynamics and the implied volatility surface. By calibrating these parameters to the market and analyzing their effects, practitioners can gain valuable insights into implied volatility smiles and skews, enabling more accurate pricing and risk management.

The lecture highlights the importance of calibrating stochastic volatility models to the entire implied volatility surface rather than just a single point. Path-dependent payoffs and strike direction dependencies necessitate a comprehensive calibration approach to capture the full complexity of market data. Typically, practitioners calibrate the models to liquid instruments such as calls and puts and then extrapolate to exotic derivatives' prices. While stochastic volatility models allow for calibration to the entire volatility surface, it is acknowledged that the calibration process is not perfect and has its limitations.

To further enhance the understanding of stochastic volatility models, the lecturer delves into the concept of fat-tailed distributions, which are often observed when calibrating models to market data. The speaker explains that if a model's Feller condition is not satisfied, the Monte Carlo paths may hit zero and remain at zero, affecting the model's accuracy. Additionally, the inclusion of jumps and the independent consideration of correlations in stochastic volatility models are discussed. The lecture provides insights into how these elements influence volatility dynamics and pricing.

The lecture concludes by comparing the Heston model to the Black-Scholes model. While the Heston model offers greater flexibility and stochasticity in modeling volatility, the Black-Scholes model remains a benchmark for pricing derivatives. Understanding the implications of different parameter changes on implied volatility smiles and skews is essential for practitioners to choose the appropriate model for their specific needs. Through comprehensive calibration and analysis, stochastic volatility models such as Heston's can provide valuable insights into pricing and risk management in financial markets.

In addition to discussing the Heston model, the lecture addresses the importance of correlation and variance in Brownian motion. The speaker explains that when dealing with correlated Brownian motions, certain relationships and conditions must hold true, including the use of Cholesky decomposition. This technique allows for the correlation of two Brownian motions using a positive definite matrix and the multiplication of two lower triangular matrices. The lecture emphasizes that this method is essential for formulating processes in multi-dimensional cases and achieving the desired correlation structure.

Furthermore, the lecturer focuses on the construction and representation of independent and correlated Brownian motions in stochastic volatility models. While Cholesky decomposition is a useful tool for correlating Brownian motions, the lecture points out that for practical purposes, it is not always necessary. Instead, Ito's lemma can be applied to incorporate correlated Brownian motions effectively. The lecture provides examples of constructing portfolios of stocks with correlated Brownian motions and demonstrates how to apply Ito's lemma to determine the dynamics of multi-dimensional functions involving multiple variables.

The lecture also covers the pricing partial differential equation (PDE) for the Heston model using a martingale approach. This approach involves ensuring that a specific quantity, pi, the value of the contract discounted by the money-market account, is a martingale. By applying Ito's lemma, the lecture derives the equation for the martingale, which involves derivatives and the variance process. The pricing PDE allows for the determination of fair prices for derivative contracts and the use of the risk-neutral measure in pricing.
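
In standard notation (option value V(t, S, v), mean-reversion speed kappa, long-term variance, vol-of-vol gamma, correlation rho), the resulting Heston pricing PDE as it appears throughout the literature reads:

```latex
\frac{\partial V}{\partial t}
+ rS\frac{\partial V}{\partial S}
+ \kappa(\bar v - v)\frac{\partial V}{\partial v}
+ \tfrac{1}{2} v S^2 \frac{\partial^2 V}{\partial S^2}
+ \rho\gamma v S \frac{\partial^2 V}{\partial S\,\partial v}
+ \tfrac{1}{2}\gamma^2 v \frac{\partial^2 V}{\partial v^2}
- rV = 0.
```

The drift term kappa(vbar - v) and the v-weighted second-order terms are exactly what the martingale argument produces once Ito's lemma is applied to the discounted value.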

Moreover, the speaker discusses the impact of different parameters on the implied volatility shape in stochastic volatility models. Parameters such as gamma, correlation, and the speed of mean reversion (kappa) are shown to influence the curvature, skewness, and term structure of implied volatilities. Understanding the effects of these parameters helps in accurately calibrating the models and capturing the desired volatility dynamics.

Throughout the lecture, the speaker emphasizes the importance of model calibration, particularly to the entire implied volatility surface. Calibrating to liquid instruments and extrapolating to exotic derivatives is a common practice among practitioners. Stochastic volatility models, including the Heston model, provide the flexibility to calibrate to the entire volatility surface, enabling better accuracy in pricing and risk management. However, it is acknowledged that model calibration is not without limitations and that subtle differences between models, such as the Heston and Black-Scholes models, should be carefully examined to ensure appropriate pricing and risk assessment.

The lecture provides a comprehensive overview of stochastic volatility models, focusing on the Heston model, its parameter implications, calibration techniques, and the role of correlation and variance in Brownian motion. By understanding and effectively applying these concepts, practitioners can enhance their ability to price derivatives, manage risks, and navigate the complexities of financial markets.

  • 00:00:00 In this section, we learn about stochastic volatility models as an alternative to Black-Scholes models, which may have deficiencies. The inclusion of jumps can fix some issues, but they are difficult to implement and interpret. Stochastic volatility models are in a class of affine diffusion models that require advanced techniques to obtain prices and implied volatilities efficiently. The lecture covers the motivation for stochastic volatility and introduces the two-dimensional stochastic volatility model of Heston. We also cover how to correlate Brownian motions and use correlation in multi-dimensional models, extend Ito's lemma to higher-dimensional cases, and derive pricing PDEs using martingale approaches, Monte Carlo, and Fourier transformations. The lecture emphasizes the importance of understanding the meaning and impact of each parameter when managing risks associated with a curvature or skew. Lastly, we compare the Heston model against the Black-Scholes model and derive and use the characteristic function for the former.

  • 00:05:00 In this section, the lecturer discusses the importance of calibrating a model to the entire implied volatility surface as opposed to just one point on the surface. They explain that if a payoff is path-dependent and depends on strike direction, calibrating only to one point on the surface is not enough. The lecture goes on to explain how practitioners typically calibrate models to liquid instruments such as calls and puts and then extrapolate to the price of exotic derivatives. The lecturer also explains that stochastic volatility models are popular in the market as they allow practitioners to calibrate to the entire volatility surface, although the calibration is not perfect and has its limitations.

  • 00:10:00 In this section, the speaker discusses the use of stochastic volatility models for calibrating to the volatility surface of the stock market. They explain that if the surface has a steep smile, a model that includes jumps may be needed, or a model like stochastic volatility that models volatility as a random variable. The speaker also explains the different measures used for pricing options, including the P measure and risk-neutral measure. They caution that making interest rates time-dependent does not improve smiles or skew, but making volatility stochastic or local can help with calibration. Finally, they introduce the Heston model, which uses mean-reverting square root processes to model volatility.

  • 00:15:00 In this section of the lecture, the concept of stochastic volatility models is discussed. The use of a normal process and Brownian motion to define a stochastic differential equation is explained, but it fails to accurately model volatility as it can become negative. The benefits of the Cox-Ingersoll-Ross (CIR) process are then highlighted, as it has fat tails and is non-negative, making it a suitable model for volatility. The Heston model, with its stochastic volatility structure, is introduced, and VT, the variance for the Heston model, is shown to follow a non-central chi-square distribution. It is explained that this is a transition distribution, and the Feller condition is mentioned as an important technical condition to check during model calibration.

  • 00:20:00 In this section, the instructor discusses the conditions for stochastic volatility models to have paths that do not hit zero, also known as the Feller condition. The condition is satisfied when two times the product of the kappa parameter and the long-term mean is greater than or equal to gamma squared, the volatility squared. If the condition is not satisfied, paths can hit zero and bounce back, which is known as an attainable boundary condition. The instructor also explains the properties of non-central chi-squared distributions and how they relate to CIR processes. Finally, the instructor provides graphs of variance paths and density for when the Feller condition is satisfied and not satisfied.

  • 00:25:00 In this section, the speaker discusses stochastic volatility models and the importance of fat-tailed distributions, which are often observed after calibrating models to market data. The speaker notes that if a model's Feller condition is not satisfied, then the Monte Carlo paths might hit zero and stay at zero. The speaker then explains how correlation is included in models via Brownian motion and that jumps are typically considered to be independent. The section ends with a graph that shows the effects of the Feller condition on density.

  • 00:30:00 In this section of the video on stochastic volatility models, the speaker discusses correlation and variance in Brownian motion. He explains that if dealing with correlated Brownian motions, a certain relation must hold true, and the same applies for increments. The speaker goes on to describe the technique of Cholesky decomposition, which allows for the correlation of two Brownian motions using a positive definite matrix and the multiplication of two lower triangular matrices. This method will be used to help formulate the two processes in the upcoming discussion.

  • 00:35:00 In this section, the lecturer discusses the construction of lower triangular matrix multiplication with independent Brownian motions, which results in a vector containing a combination of independent and correlated processes. The lecture demonstrates how to determine the correlation between two Brownian motions by simplifying notation and substituting expressions. By using this derivation, the same properties of moments and correlation are preserved, allowing for flexibility in the choice of a suitable decomposition method.

  • 00:40:00 In this section of the lecture, the presenter discusses the switch from using two correlated Brownian motions to using two independent variables, and how correlation can be achieved using Cholesky decomposition. The benefits of dealing with independent Brownian motions are also explained, with sample graphs given to show the differences in negative, positive, and zero correlations. The presenter also gives a code example of how to simulate these correlations using the standardization of samples and generation of paths. The process of generating Brownian motion is also highlighted, with the new realization for Brownian motion being generated from the previous one using an iterative process.
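
The simulation approach described here, standardizing the samples and correlating them through the Cholesky factor, can be sketched as follows (illustrative parameters; NumPy assumed):

```python
import numpy as np

rng = np.random.default_rng(7)
rho, T, n = -0.7, 1.0, 1000
dt = T / n

# Cholesky factor of the 2x2 correlation matrix [[1, rho], [rho, 1]]
L = np.linalg.cholesky(np.array([[1.0, rho], [rho, 1.0]]))

Z = rng.standard_normal((2, n))
Z = (Z - Z.mean(axis=1, keepdims=True)) / Z.std(axis=1, keepdims=True)  # standardize samples
dW = np.sqrt(dt) * (L @ Z)     # correlated Brownian increments
W = np.cumsum(dW, axis=1)      # each new realization built iteratively from the previous one
```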

  • 00:45:00 In this section, the video discusses how to simulate Monte Carlo paths for correlated Brownian motion and how to deal with higher dimensions and non-positive definite correlation matrices. The Cholesky decomposition is used to build correlated Brownian increments from independent ones, with covariance equal to the correlation times dt, and this can be applied in every dimension. However, if you encounter a non-positive definite correlation matrix, you need to use certain algorithms to map the matrix to a positive definite one. It is also important to specify bounds for your correlation coefficients to make sure they stay within the realistic range of -1 to 1. Additionally, the video mentions that in practice, each process in a multi-dimensional case may depend on all correlated Brownian motions, but this is an unusual case.
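
One simple algorithm of the kind mentioned, eigenvalue clipping followed by rescaling the diagonal back to ones, is sketched below; it is an assumption that this is the flavor the lecture means (Higham's algorithm is the more rigorous alternative), and `C_bad` is an illustrative matrix, not from the lecture:

```python
import numpy as np

def nearest_psd_correlation(C, eps=1e-8):
    """Clip negative eigenvalues and rescale to unit diagonal.
    A simple spectral fix for a non-positive definite correlation matrix."""
    w, V = np.linalg.eigh(C)
    C_psd = V @ np.diag(np.maximum(w, eps)) @ V.T   # clip eigenvalues below eps
    d = np.sqrt(np.diag(C_psd))
    C_psd = C_psd / np.outer(d, d)                  # restore ones on the diagonal
    return 0.5 * (C_psd + C_psd.T)                  # symmetrize round-off

C_bad = np.array([[ 1.0, 0.9, -0.9],
                  [ 0.9, 1.0,  0.9],
                  [-0.9, 0.9,  1.0]])               # indefinite: not a valid correlation matrix
C_ok = nearest_psd_correlation(C_bad)
```

After the repair, `C_ok` admits a Cholesky factorization and can be used to correlate Brownian motions as before.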

  • 00:50:00 In this section, the lecturer introduces the Cholesky decomposition, which is a useful tool for dealing with correlated Brownian motions and transforming a system of equations from correlated to uncorrelated. They explain how to represent the system of differential equations in terms of independent Brownian motions by using correlation and the Cholesky decomposition. The lecturer also discusses the technical condition for applying Itô's lemma for vector processes, which is that the function g must be sufficiently differentiable. They provide an example of a multi-dimensional stochastic differential equation and how to differentiate the function g with respect to each process in the vector to obtain the dynamics of the process.

  • 00:55:00 In this section, the speaker discusses the representation of independent and correlated Brownian motions in stochastic volatility models. They explain that for practical purposes, it is not necessary to make a Cholesky decomposition and instead, Ito's lemma can be used to apply correlated Brownian motions. The speaker also provides an example of constructing a portfolio of two stocks with correlated Brownian motions and sigma values. They further explain the process of applying Ito's lemma to find the dynamics of a multi-dimensional function involving two or three variables.
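
For a function g of two correlated processes, the multi-dimensional Itô lemma referred to here takes the standard form:

```latex
dg(X_t, Y_t) = \frac{\partial g}{\partial x}\,dX_t + \frac{\partial g}{\partial y}\,dY_t
 + \frac{1}{2}\frac{\partial^2 g}{\partial x^2}\,(dX_t)^2
 + \frac{\partial^2 g}{\partial x\,\partial y}\,dX_t\,dY_t
 + \frac{1}{2}\frac{\partial^2 g}{\partial y^2}\,(dY_t)^2,
\qquad dW_t^{x}\,dW_t^{y} = \rho\,dt,
```

so the correlation enters only through the cross term, which is why a Cholesky decomposition is not strictly needed for this calculation.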

  • 01:00:00 In this section of the lecture, the speaker discusses applying Itô's lemma to derive the pricing partial differential equation (PDE) for the Heston model using a martingale approach. The pricing PDE requires that the value of a derivative, discounted to the present, must be equal to its expected future value, with the money-savings account driven by the equation for interest rates and the variance process being stochastic. Although deriving a pricing PDE for a variable that is not observable or tradable can be quite involved, the martingale approach is considered one of the simpler methods to achieve this. The pricing PDE is powerful in that it enables the derivation of the fair price for a contract under the risk-neutral measure.
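
For reference, the Heston pricing PDE that this martingale argument leads to has the following standard form, written in the lecture's notation with mean-reversion speed κ, long-term mean v̄, vol-of-vol γ, and correlation ρ:

```latex
\frac{\partial V}{\partial t}
 + \frac{1}{2}\,v\,S^2\,\frac{\partial^2 V}{\partial S^2}
 + \rho\,\gamma\,v\,S\,\frac{\partial^2 V}{\partial S\,\partial v}
 + \frac{1}{2}\,\gamma^2 v\,\frac{\partial^2 V}{\partial v^2}
 + r\,S\,\frac{\partial V}{\partial S}
 + \kappa\,(\bar{v} - v)\,\frac{\partial V}{\partial v} - r\,V = 0 .
```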

  • 01:05:00 In this section, the speaker explains the martingale approach to pricing derivatives under the stochastic volatility model. The approach involves defining a quantity pi as the ratio of v over m, then ensuring that this quantity is a martingale by applying Itô's lemma. The speaker derives the equation for the martingale, which involves the simple derivative, one over m dv minus r v over m dt. The economy consists of an asset, a volatility that is not tradable, and a money-savings account. To get the solution, the speaker applies a Taylor series and handles the terms with Itô calculus, which is straightforward. However, calculating the term related to the product of the variance process and the stock is more involved. The final solution involves two Brownian motions and an extra term that depends on the correlation between the variance and the stock.

  • 01:10:00 In this section, the lecturer discusses the Heston model's flexibility and the stochasticity of its variance process in comparison to the Black-Scholes model. They explain how the model involves multiple parameters, including kappa, the long-term mean, the volatility of the variance, and the correlation, plus one more parameter, the initial value of the variance process. They also note that the model's greatest advantage is that each of these parameters has an individual impact on the implied volatility, allowing for calibration and fitting of the implied volatility smile and skew. The lecturer highlights that they will be analyzing the impact of different parameter changes on the implied volatility smiles and skews.

  • 01:15:00 In this section, the lecturer explains the effects of different parameters on the shape of implied volatility in stochastic volatility models. The gamma parameter controls the curvature of implied volatility, and increasing it leads to a steepening shape. Correlations affect the skewness of implied volatility, and negative correlations lead to a smile shape. The speed of mean reversion (kappa) affects the term structure of implied volatility, with larger kappa causing faster convergence to the long-term mean. While kappa has some effect on the level and shape of implied volatility, its primary impact is on the term structure.

  • 01:20:00 In this section, the speaker discusses the impact of different parameters on stochastic volatility models, specifically for controlling the term structure of implied volatilities. The long-term mean and the v0 parameter have a similar effect on the model: v-bar controls the level for a given maturity, while v0 controls the term structure of implied volatilities. Comparing the resulting implied volatilities to Black-Scholes can determine whether the Heston model or Black-Scholes is more appropriate. Additionally, the speaker uses option prices to illustrate the differences between the Heston and Black-Scholes models. The implied smiles controlled by the Heston model are typically associated with fatter tails, while Black-Scholes densities converge to zero much faster.

  • 01:25:00 In this section, the speaker discusses what to keep in mind when calibrating stochastic volatility models and looking at the impact of different parameters on prices. While looking at prices alone cannot determine the implied volatility shape, calibrating to out-of-the-money implied volatilities can give more insight into the model's accuracy. Differences between a model and the market can have a significant impact on implied volatilities, especially for out-of-the-money options, so understanding the volatility skew and smile is crucial in model calibration. Subtle differences between the Heston model and the Black-Scholes model require examining elements beyond option prices, such as heavier tails and the volatility shape. The correlation coefficient is also important in linking volatility with the stock, and its value is chosen based on market prices for options, not historical data.

  • 01:30:00 In this section, the speaker discusses the Heston model and its superiority over the Black-Scholes model in pricing derivatives. However, a challenge arises when trying to determine which quantity in the market represents actual stochastic volatility. To confirm whether the Heston model is affine, the speaker checks whether the drift of the state variables and the instantaneous covariance matrix are linear in the state vector, which consists of two state variables, s_t and variance_t. The speaker then explains that after performing the logarithmic transformation, they must check whether all terms are linear with respect to the state-space vector. Despite the complexity of the model, performing the logarithmic transformation does not significantly complicate the derivations.

  • 01:35:00 In this section, the speaker discusses the instantaneous covariance matrix and states that it helps check whether the process is affine or not. In addition, a characteristic function for the Heston model is derived in a handy form that is relevant to efficient and fast pricing. The speaker acknowledges that it covers a few pages of derivations in the book but highlights that all the terms are explicit and no numerical computations are necessary for solving the ODEs of the characteristic function. This is seen as one of the greatest advantages of the Heston model.
Computational Finance: Lecture 7/14 (Stochastic Volatility Models)
  • 2021.04.02
  • www.youtube.com
 

Computational Finance: Lecture 8/14 (Fourier Transformation for Option Pricing)



During the lecture on Fourier transformation for option pricing, the instructor delves into the technique's application and various aspects. They begin by explaining that the Fourier transformation is utilized to compute the density and efficiently price options for models falling within the class of affine diffusion models. The technique involves computing an integral over the real axis, which can be computationally expensive. However, by employing the inversion lemma, the instructor elucidates how the domain for "u" can be reduced, so that only the real part of the integral needs to be computed. This approach helps minimize the computational burden.
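
The domain reduction rests on the symmetry of the characteristic function of a real-valued random variable:

```latex
f(x) = \frac{1}{2\pi}\int_{-\infty}^{\infty} e^{-iux}\,\varphi(u)\,du
     = \frac{1}{\pi}\int_{0}^{\infty} \operatorname{Re}\!\left[e^{-iux}\,\varphi(u)\right]du,
\qquad \text{since } \varphi(-u) = \overline{\varphi(u)},
```

so the integral over the full real axis collapses to one over the positive half-axis, with only the real part needed.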

The lecturer further discusses the improvement of this representation using the fast Fourier transformation (FFT), which significantly enhances implementation efficiency. By leveraging the properties of the FFT, the computational workload is reduced, making option pricing more efficient and faster. The session concludes with a comparison between the Fourier transformation method and the COS method, providing insights into their respective implementation details.

Moving forward, the lecturer delves into the first step in deriving a fast way to calculate density using the Fourier transformation. This step involves dividing the domain into two and extracting the real part, which is a computationally inexpensive operation. Additionally, the lecturer explores the division of complex numbers and the importance of taking the conjugate, as it facilitates more efficient calculations of the characteristic function. The construction of a grid to obtain the density for each "x" value is also discussed, highlighting the significance of selecting appropriate domains and defining boundaries.

The lecture proceeds with an explanation of the calculation of the density of "x" using a Fourier transformation integral and a grid comprising "n" grid points. The instructor emphasizes the need to perform density calculations for multiple "x" values simultaneously. Once the grids are defined, a new integral involving a function named "gamma" is introduced, and trapezoidal integration is employed to approximate the discrete integral. To illustrate this process, the lecturer provides an example of performing trapezoidal integration for a function with an equally spaced grid.
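
Trapezoidal integration on an equally spaced grid, as described above, can be sketched in a few lines; the integrand exp(-u^2/2) is an illustrative choice with a known closed-form integral:

```python
import numpy as np

u = np.linspace(0.0, 10.0, 1001)          # equally spaced grid on [0, 10]
g = np.exp(-0.5 * u**2)                   # illustrative integrand
du = u[1] - u[0]

# trapezoidal rule: interior nodes get full weight, the two boundary nodes half weight
approx = du * (0.5 * g[0] + g[1:-1].sum() + 0.5 * g[-1])
exact = np.sqrt(np.pi / 2.0)              # integral of exp(-u^2/2) over [0, inf)
```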

The speaker then delves into the process of configuring parameters to define the grid for Fourier transformation. These parameters encompass the number of grid points, the maximum value of "u," and the relationship between delta "x" and delta "u." Once these parameters are established, integrals and summations can be substituted, enabling the derivation of a function for each "x" value. The lecture includes an equation incorporating trapezoidal integration and characteristic functions evaluated at the boundary nodes of the trapezoid.
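
The relationship between delta "x" and delta "u" mentioned here is the constraint the FFT imposes on the two grids:

```latex
\Delta u = \frac{u_{\max}}{N}, \qquad
\Delta u\,\Delta x = \frac{2\pi}{N}, \qquad
u_k = k\,\Delta u, \quad x_j = x_0 + j\,\Delta x,
\qquad u_k x_j = k\,\Delta u\,x_0 + \frac{2\pi jk}{N},
```

so that each factor e^{-i u_k x_j} splits into a fixed phase times the e^{-2πi jk/N} kernel that the FFT evaluates.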

The representation of the integral and the importance of employing fast Fourier transformation (FFT) in option pricing are discussed in detail. The speaker explains that by defining a function suitable for input into FFT, practitioners can take advantage of the fast evaluation and implementation capabilities already present in most libraries. The lecturer proceeds to explain the steps involved in computing this transformation and how it can be utilized to calculate integrals. Overall, the lecture underscores the significance of FFT in computational finance and its usefulness in option pricing.
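
A minimal sketch of this density recovery via `numpy.fft.fft` is shown below, with trapezoidal weights and illustrative grid sizes; the standard normal characteristic function is used as a test case:

```python
import numpy as np

def fft_density(cf, x0, N=4096, u_max=200.0):
    """Recover a density from its characteristic function cf on a grid
    starting at x0, using the FFT for the discretized inversion integral."""
    du = u_max / N
    dx = 2.0 * np.pi / (N * du)          # FFT grid constraint: du * dx = 2*pi / N
    u = np.arange(N) * du
    x = x0 + np.arange(N) * dx
    w = np.ones(N)
    w[0] = w[-1] = 0.5                   # trapezoidal weights on [0, u_max]
    a = w * cf(u) * np.exp(-1j * u * x0) * du
    return x, np.real(np.fft.fft(a)) / np.pi

# recover the standard normal density from phi(u) = exp(-u^2/2)
x, f = fft_density(lambda u: np.exp(-0.5 * u**2), x0=-10.0)
```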

In addition to the aforementioned topics, the lecture explores various aspects related to Fourier transformation for option pricing. These include the use of interpolation techniques to ensure accurate calculations for a discrete number of points, the relationship between the Taylor series and the characteristic function, the application of the cosine expansion method for even functions, and the use of truncated domains to approximate density. The lecture also covers the recovery of density, the numerical results obtained using Fourier expansion, and the pricing representation in the form of matrices and vectors.

Throughout the lecture, the instructor emphasizes the practical implementation of the Fourier transformation method, discusses the impact of different parameters, and highlights the advantages and limitations of the approach. By providing comprehensive explanations and numerical experiments, the lecture equips learners with the knowledge and tools necessary to apply Fourier transformation for option pricing in real-world scenarios.

The lecturer proceeds to discuss the recovery of density function in Fourier Transformation for option pricing. They emphasize the importance of selecting a sufficiently large number of points (denoted as "n") in the transformation to achieve high accuracy density calculations. The lecturer introduces the complex number "i" to define the domain and maximum, with "u_max" determined by the distribution. Furthermore, the lecturer explains the need for interpolation, particularly using cubic interpolation at the grid points "x_i" to ensure accurate calculation of the output density function, even for inputs that do not lie on the grid.
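
The cubic interpolation at the grid points can be sketched with SciPy; here the density on the grid is an illustrative standard normal rather than one recovered by Fourier inversion:

```python
import numpy as np
from scipy.interpolate import interp1d

# density known only on a coarse grid x_i (illustrative: standard normal)
x_grid = np.linspace(-5.0, 5.0, 101)
f_grid = np.exp(-0.5 * x_grid**2) / np.sqrt(2.0 * np.pi)

# cubic interpolation lets us evaluate the density for inputs off the grid
density = interp1d(x_grid, f_grid, kind="cubic")
f_off_grid = density([0.123, 1.456, -2.789])
```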

The speaker further explores the benefits of interpolation and its relevance to option pricing using Fourier transformation. While the FFT is advantageous for large grids, interpolating on a coarser grid may be preferred for large numbers of evaluation points, as it is comparatively less computationally expensive than a full FFT evaluation. The speaker demonstrates how interpolation works through code examples, highlighting that by adjusting parameters, it becomes possible to calculate sensitivities and obtain Greeks at no additional cost. This feature makes the cosine expansion technique ideal for pricing more exotic derivatives such as barrier and Bermudan options.

Additionally, the lecturer discusses the relationship between the Taylor series and the characteristic function in computational finance. The lecture showcases the one-to-one correspondence between the series and the characteristic function, allowing for direct relations without requiring additional integrals. The lecturer then describes the "COS method" for option pricing, which employs a Fourier cosine expansion to represent even functions around zero. This method involves calculating integrals and coefficients, with the crucial note that the first term of the expansion should always be multiplied by half.

The lecture takes a closer look at the process of changing the domain of integration for function "g" to achieve a finite support range from "a" to "b". The speaker explains the importance of the Euler formula in simplifying the expression and shows how substituting "u" with "k pi divided by b-a" leads to a simpler expression involving the density. The truncated domain is denoted by a hat symbol, and specific values for parameters "a" and "b" are chosen based on the problem being solved. The speaker emphasizes that this is an approximation technique and that heuristic choices are involved in selecting the values of "a" and "b".

Furthermore, the lecture explores the relationship between the Fourier expansion and the recovery of density. By taking the real parts of both sides of the equation, the lecture demonstrates the Euler formula that allows expressing the integral of the density as the real part of the characteristic function. This elegant and fast method facilitates finding the relations between integrals of the target function and the characteristic function by using the definition of the characteristic function. The COS method aims to discover these relations to calculate expansion coefficients and recover the density. Although the method introduces errors from truncating the infinite summation and from the truncated domain, these errors are easy to control.

The lecture then focuses on summarizing the Fourier cosine expansion, which can achieve high accuracy even with a small number of terms. A numerical experiment involving a normal probability density function (PDF) is conducted to examine error generation based on the number of terms, with time measurement included. The code experiment is structured to generate density using the cosine method, defining error as the maximum absolute difference between the density recovered using the cosine method and the exact normal PDF. The cosine method requires only a few lines of code to recover density using the characteristic function, which lies at the heart of the method.
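
The few lines of code referred to can be sketched as follows, recovering the standard normal density from its characteristic function phi(u) = exp(-u^2/2); the domain [-10, 10] and N = 64 terms are illustrative choices:

```python
import numpy as np

def cos_density(cf, x, a, b, N):
    """Recover a density on [a, b] from its characteristic function cf
    with an N-term Fourier cosine expansion (first term weighted by one half)."""
    k = np.arange(N)
    u = k * np.pi / (b - a)
    F = 2.0 / (b - a) * np.real(cf(u) * np.exp(-1j * u * a))
    F[0] *= 0.5                                    # first expansion term times one half
    return F @ np.cos(np.outer(u, x - a))

x = np.linspace(-5.0, 5.0, 1001)
f_cos = cos_density(lambda u: np.exp(-0.5 * u**2), x, a=-10.0, b=10.0, N=64)
f_exact = np.exp(-0.5 * x**2) / np.sqrt(2.0 * np.pi)
err = np.max(np.abs(f_cos - f_exact))              # maximum absolute difference
```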

Additionally, the speaker discusses the numerical results of the Fourier expansion, which can be efficiently performed using matrix notation. The error decreases as the number of expansion terms increases, with an error as low as 10^-17 achieved with 64 terms. Using a smaller number of terms can result in oscillations or a poorer fit. The speaker notes that parameters such as the domain and the number of expansion terms should be carefully tuned, especially for heavy-tailed distributions. Furthermore, the lecture highlights that the log-normal density can also be recovered using the normal characteristic function.

Moving forward, the lecturer delves into the log-normal case and explains how its density differs from the normal distribution. Due to the log-normal distribution, a higher number of expansion terms is typically required. The lecturer emphasizes the importance of choosing an appropriate number of terms for a specific type of distribution and domain.

The lecture emphasizes that the COS method is particularly useful for recovering density and is commonly employed for derivative pricing, such as European-type options that only have a payment at maturity. The lecturer proceeds to explain how pricing works, involving the integration of the product of a density and payoff function under the risk-neutral measure.

As the lecture progresses, the speaker discusses more exotic options, where a characteristic function can be derived and cosines can be used. The term "transition densities" is introduced, referring to the distributions that describe the transition from one point on the time axis to another. The initial value is given in terms of the distribution of a random variable. The presentation further explores truncation of density, where the density is limited to a specified interval. The Gaussian quadrature method is explained, which involves integrating a summation of the real parts of a characteristic function multiplied by some exponent.

The lecture introduces the concept of the adjusted log asset price, which is defined as the logarithm of the stock at maturity divided by a scaling coefficient. An alternative representation of the payoff is presented, and the speaker notes that the choice of "v" directly impacts the coefficient "h_n." This approach can be used for evaluating payoffs for multiple strikes, providing a convenient method for pricing options at various strike prices simultaneously.

Next, the speaker delves into the process of computing the integral of a payoff function multiplied by the density using exponential and cosine functions in Fourier transformation for option pricing. A generic form for the two integrals involved is provided, and different coefficients are selected to calculate various payoffs. The speaker emphasizes the importance of being able to implement this technique for multiple strikes, allowing for the pricing of all strikes at once, which saves time and reduces computational expenses. Finally, the pricing representation is presented in the form of a matrix multiplied by a vector.

The implementation formula for Fourier transformation in option pricing is discussed, involving the vectorization of elements and matrix manipulations. The lecture explains the process of taking "k" as a vector and creating a matrix with "n_k" strikes. Real parts are calculated to handle complex numbers. The characteristic function is of high importance as it does not depend on "x" and plays a key role in achieving efficient implementations for multiple strikes. The accuracy and convergence of the implementation depend on the number of terms, and a sample comparison is shown.
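
A sketch of this matrix-times-vector pricing for European calls is given below, here under Black-Scholes (GBM) dynamics so the result can be checked against the closed-form price; the parameter names, the cumulant-based truncation range, and L = 10 are illustrative choices, not the lecture's exact code:

```python
import numpy as np

def cos_call_prices(S0, K, r, sigma, T, N=256, L=10.0):
    """COS prices of European calls for a whole vector of strikes at once,
    under Black-Scholes (GBM) dynamics, as a single matrix-vector product."""
    K = np.atleast_1d(K).astype(float)
    # truncation range [a, b] from the first two cumulants of the log-return
    c1, c2 = (r - 0.5 * sigma**2) * T, sigma**2 * T
    a, b = c1 - L * np.sqrt(c2), c1 + L * np.sqrt(c2)
    u = np.arange(N) * np.pi / (b - a)
    # characteristic function of the log-return (does not depend on the strike)
    cf = np.exp(1j * u * (r - 0.5 * sigma**2) * T - 0.5 * sigma**2 * u**2 * T)
    # payoff cosine coefficients V_k of the call payoff on [0, b]
    chi = (np.cos(u * (b - a)) * np.exp(b) - np.cos(-u * a)
           + u * np.sin(u * (b - a)) * np.exp(b) - u * np.sin(-u * a)) / (1.0 + u**2)
    psi = np.empty(N)
    psi[0] = b
    psi[1:] = (np.sin(u[1:] * (b - a)) - np.sin(-u[1:] * a)) / u[1:]
    Vk = 2.0 / (b - a) * (chi - psi)
    Vk[0] *= 0.5                             # first term weighted by one half
    x = np.log(S0 / K)                       # one log-moneyness per strike
    # real parts handle the complex numbers; rows of M vary only through x
    M = np.real(np.exp(1j * np.outer(x, u)) * (cf * np.exp(-1j * u * a)))
    return np.exp(-r * T) * K * (M @ Vk)     # matrix times vector: all strikes at once
```

Because the characteristic function does not depend on x, it is computed once and reused for every strike, which is exactly what makes the multi-strike implementation efficient.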

Additionally, the speaker delves into the code used for the Fourier transformation method in option pricing and explains the different variables involved. They introduce the truncation-range parameter for the coefficients "a" and "b," typically kept at 10 or 8 for jump-diffusion models. The code includes a lambda expression for the characteristic function, which is a generic function adaptable to different models. The speaker emphasizes the significance of measuring time by conducting multiple iterations of the same experiment and calculating the average time. Finally, they illustrate the COS method and how its integration range must be chosen wide enough to accommodate a large volatility.

The lecture continues with an explanation of the process of defining strikes and calculating coefficients for the Fourier transform method of option pricing. The lecturer emphasizes that while tuning the model parameters can lead to better convergence and require fewer terms for evaluation, it is generally safe to stick with standard model parameters. They detail the steps of defining a matrix and performing matrix multiplication to obtain the discounted option price for each strike, comparing the resulting error against that of the exact solution. The lecture highlights that the error depends on the number of terms and the chosen strike range.

The speaker then presents a comparison of different methods for option pricing, including the Fast Fourier Transform (FFT) method and the COS method. They explain that the FFT method is more suitable for a large number of grid points, while the COS method is more efficient for a smaller number of expansion terms. The lecturer demonstrates the calculation of option prices using both methods and compares the results.

Moreover, the lecture covers the application of Fourier-based methods in other areas of finance, such as risk management and portfolio optimization. The lecturer explains that Fourier-based methods can be used to estimate risk measures such as Value-at-Risk (VaR) and Conditional Value-at-Risk (CVaR). By combining Fourier methods with optimization techniques, it is possible to find optimal portfolio allocations that minimize risk or maximize returns.

The lecture concludes by summarizing the main points discussed throughout the presentation. Fourier transformation techniques provide a powerful tool for option pricing and other financial applications. The COS method allows for efficient and accurate pricing of options by leveraging the characteristic function and Fourier expansion. The choice of parameters, such as the number of terms and the domain, impacts the accuracy and convergence of the method. Additionally, Fourier-based methods can be extended to various financial problems beyond option pricing.

Overall, the lecture provides a comprehensive overview of Fourier transformation techniques in option pricing, covering topics such as the recovery of density, interpolation, the COS method, log-normal distributions, multiple strikes, implementation considerations, and comparisons with other pricing methods. The lecturer's explanations and code examples help illustrate the practical application of these techniques in finance and highlight their benefits in terms of accuracy and efficiency.

  • 00:00:00 In this section, we learn about Fourier transformation for option pricing. The technique of Fourier transformation is used to compute the density and efficiently price options for models that belong to the class of affine diffusion models. The technique involves computing an integral over the real axis, which can be computationally expensive. However, by using the inversion lemma, we can reduce the domain for u and compute the real part of the integral, which helps in moving away from expensive computations. The section includes a discussion of the improvement of this representation using the fast Fourier transformation, making the implementation much faster and more efficient. Finally, the session concludes with a comparison of the Fourier transformation method and the COS method, along with the implementation details of these techniques.

  • 00:05:00 In this section, the lecturer discusses the first step in deriving a fast way of calculating density for using the fast Fourier transformation for option pricing. The first step involves dividing the domain into two and taking the real part, which is a cheap operation. Additionally, the lecturer discusses dividing complex numbers and taking the conjugate, which allows for a more efficient calculation of the characteristic function. The lecture also covers constructing a grid to get the density for every x, which involves choosing a certain domain and defining boundaries.

  • 00:10:00 In this section of the lecture, the professor explains how to calculate the density of x using a Fourier transformation integral and a grid of n number of grid points. They clarify that the density calculation needs to be done for multiple x's at the same time. Once the grids are defined, they define a new integral from 0 to infinity of a function named gamma and determine the trapezoidal integration from the discrete integral. The professor gives an example to explain how to perform trapezoidal integration for a function with an equally spaced grid.

  • 00:15:00 In this section of the lecture, the speaker discusses the process of configuring parameters in order to define the grid for Fourier transformation. These parameters include the number of grid points, the maximum value of u, and a relation between delta x and delta u. Once these parameters are defined, integrals and summations can be substituted and a function can be obtained for each x value. The speaker provides an equation that includes a trapezoidal integration and character functions evaluated at boundary nodes of the trapezoid.

  • 00:20:00 In this section of the lecture, the speaker discusses the representation of the integral and the importance of using the fast Fourier transformation (FFT) in option pricing. The speaker explains that by defining a function that fits the inputs for FFT, we can benefit from the fast evaluation and implementation of FFT already available in most libraries. The speaker then goes on to explain the steps involved in computing this transformation and how it can be used to calculate integrals. Overall, the lecture highlights the relevance of FFT in computational finance and its usefulness for option pricing.

  • 00:25:00 In this section, the lecturer discusses the Fourier transformation for option pricing. They start by defining the characteristic function and the grid used for the Fourier transformation. The lecturer notes the need for interpolation, as we have a discrete number of points, for example, a few thousand, while many more may be required for a smooth result. They note that the trapezoidal integration of the characteristic function helps to recover the density, but on its own is still not efficient. The lecturer explains that it is possible to reduce the number of evaluations and operations required for the discretized Fourier transformation by using the fast Fourier transformation. They show a graph comparing the number of operations as the number of grid points increases, where the complexity achieved with the fast Fourier transformation is significantly better.

  • 00:30:00 In this section, the lecturer explains the Fourier transformation and its use in option pricing. They focus on one term and define the recovery of the density calculated from the characteristic function. Using the fast Fourier transformation, the lecturer emphasizes that the greatest advantage is that the terms on either side of the diagonal in the matrix M are actually the same, and this fact can be used to reduce the number of operations needed for computation. Additionally, the lecture goes into the symmetry and similarity between the terms on opposite sides of the diagonal. The lecture provides a detailed explanation of the correction term that is essential for representing the problem in terms of z_k.

  • 00:35:00 In this section, the instructor discusses the application of the Fast Fourier Transformation (FFT) in computational finance. The FFT algorithm helps reduce the number of computations necessary by utilizing the similarity properties of terms in the matrix. However, to use the FFT, the formulation needs to be in a special form that the algorithm can digest. The instructor emphasizes that different numerical integration techniques can be used to recover density, but the formulation needs to be such that the FFT can be applied. Finally, the instructor provides an experiment showing the coding of the FFT for a Gaussian distribution and how different parameters impact the recovery of density.

  • 00:40:00 In this section, the lecturer discusses the details regarding the recovery density function in Fourier Transformation for Option Pricing. The number of points used in the transformation is n, which must be large enough to achieve high accuracy density. The lecturer defines i as a complex number used to define the domain and maximum, with u max being determined by the distribution. The lecturer goes on to explain how to handle interpolation, using a cubic interpolation at the grid x i on f x i points. This interpolation is necessary to ensure that the output density function is calculated accurately even for inputs that are not in the grid.

  • 00:45:00 In this section of the video, the speaker discusses the benefits of interpolation and how it relates to option pricing using Fourier transformation. The speaker mentions that while the Fourier transformation is beneficial for large grids, interpolation may be preferred for large numbers of evaluation points, as it is comparatively cheaper than the FFT. The speaker also demonstrates how interpolation works via code and explains that by changing parameters, it is possible to calculate sensitivities and obtain Greeks at no additional cost, making the cosine expansion technique ideal for pricing more exotic derivatives such as barrier and Bermudan options.

  • 00:50:00 In this section, the lecturer discusses the relationship between the Taylor series and the characteristic function used in computational finance. The series has a one-to-one correspondence with the characteristic function, allowing direct relations without additional integrals. The lecturer then goes on to describe the COS method for option pricing, which uses a Fourier cosine expansion to represent even functions around zero. The method involves calculating integrals and coefficients, and it is important to keep in mind that the first term of the expansion should always be multiplied by half.

  • 00:55:00 In this section, the speaker discusses the need to change the domain of integration of the function g in order to obtain finite support on an interval [a, b]. They explain the role of the Euler formula in simplifying the expression and show how substituting u with kπ/(b − a) leads to a simpler expression involving the density. The truncated domain is denoted by a hat, and specific values of the parameters a and b are chosen depending on the problem being solved. The speaker emphasizes that this is an approximation technique and that heuristic choices are involved in selecting a and b.

  • 01:00:00 In this section, the lecture explores the relationship between the Fourier cosine expansion and the recovery of the density. By taking the real parts of both sides of the equation, the lecture shows that the Euler formula lets us express the integral of the density as the real part of the characteristic function. This is an elegant and fast way of relating integrals of the target function to the characteristic function, using the definition of the characteristic function. The COS method is about exploiting these relations to calculate the expansion coefficients and recover the density. The method introduces errors that come from truncating the infinite summation and from the truncated domain, but these errors are easy to control.
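
  As a hedged reconstruction (the standard COS identities, with φ the characteristic function of the density f and [a, b] the truncated support), the relation described above reads:

  ```latex
  A_k \approx \frac{2}{b-a}\,\operatorname{Re}\left\{\varphi\!\left(\frac{k\pi}{b-a}\right)
  \exp\!\left(-\frac{i k \pi a}{b-a}\right)\right\},
  \qquad
  f(x) \approx {\sum_{k=0}^{N-1}}{}'\; A_k \cos\!\left(k\pi\,\frac{x-a}{b-a}\right),
  ```

  where the primed sum indicates that the k = 0 term is multiplied by one half. The two error sources are visible here: the finite N (truncated summation) and the finite [a, b] (truncated domain).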

  • 01:05:00 In this section of the lecture on the Fourier transform for option pricing, the focus is a summary of the Fourier cosine expansion. The expansion can achieve high accuracy even with only a few terms, as shown in a numerical experiment involving a normal PDF, in which the error is checked as a function of the number of terms and the computation time is measured. The code experiment generates the density using the COS method and defines the error as the maximum absolute difference between the density recovered by the COS method and the exact normal PDF. The COS method requires only a few lines of code to recover a density from the characteristic function, which is the heart of the method.
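
  Those "few lines of code" can be sketched as follows (my own reconstruction of the experiment described above, not the lecture's exact code; the domain [−10, 10] and 64 terms are illustrative choices):

  ```python
  import numpy as np

  # Illustrative sketch of COS density recovery: reconstruct a
  # standard-normal PDF from its characteristic function.
  def cos_density(cf, x, a, b, n_terms):
      k = np.arange(n_terms)
      u = k * np.pi / (b - a)
      # A_k = 2/(b-a) * Re{ cf(u_k) * exp(-i * u_k * a) }
      A = 2.0 / (b - a) * np.real(cf(u) * np.exp(-1j * u * a))
      A[0] *= 0.5                                   # half weight on the first term
      return A @ np.cos(np.outer(u, x - a))

  cf = lambda u: np.exp(-0.5 * u**2)                # N(0, 1) characteristic function
  x = np.linspace(-4.0, 4.0, 9)
  f = cos_density(cf, x, a=-10.0, b=10.0, n_terms=64)

  # error = maximum absolute difference against the exact normal PDF
  exact = np.exp(-0.5 * x**2) / np.sqrt(2 * np.pi)
  print(np.max(np.abs(f - exact)))
  ```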

  • 01:10:00 In this section, the speaker discusses the numerical results of the Fourier cosine expansion, which can be computed efficiently with matrix notation. The error decreases as the number of expansion terms increases, with an error of 10^-17 achieved with 64 terms. A smaller number of terms can result in oscillations or a poor fit. The speaker notes that parameters such as the domain and the number of expansion terms should be tuned, especially for heavy-tailed distributions. The log-normal density can also be recovered using the normal characteristic function.

  • 01:15:00 In this section, the lecturer discusses the log-normal case and how its density differs from the normal distribution. Because of the shape of the log-normal distribution, a higher number of expansion terms is needed. The lecturer encourages tuning the number of terms to the specific type of distribution and domain. The COS method is powerful for recovering densities and is mainly used for derivative pricing, for example European-type options that have a single payment at maturity. The lecturer explains how pricing works: one integrates the product of a density and the payoff function under the risk-neutral measure.
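
  In formula form, this pricing step and its COS approximation can be sketched as follows (a hedged reconstruction of the standard risk-neutral pricing relation; H_k denotes the cosine coefficients of the payoff, which are derived later in the lecture):

  ```latex
  V(t_0, x) \;=\; e^{-r\tau}\,\mathbb{E}^{\mathbb{Q}}\!\left[V(T, y)\mid x\right]
  \;\approx\; e^{-r\tau} \int_a^b V(T, y)\, f(y\mid x)\, dy
  \;\approx\; e^{-r\tau} {\sum_{k=0}^{N-1}}{}'\;
  \operatorname{Re}\left\{\varphi\!\left(\tfrac{k\pi}{b-a};\, x\right)
  e^{-i k\pi a/(b-a)}\right\} H_k .
  ```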

  • 01:20:00 In this section, the video discusses more exotic options for which a characteristic function can be derived and the COS method can be applied. The relevant distributions are transition densities: when calculating the transition density from one point on the time axis to another, the initial value enters as a conditioning variable in the distribution of the random variable. The presentation then discusses truncation of the density, where the density is cut off on a specified interval, and the quadrature involved, which amounts to integrating a summation of real parts of the characteristic function times an exponent. The adjusted log-asset price is defined as the logarithm of the stock at maturity divided by a scaling coefficient, and an alternative representation of the payoff is presented. The video notes that this choice has a direct impact on the payoff coefficients H_n and that the approach can be used for evaluating payoffs at multiple strikes.

  • 01:25:00 In this section, the speaker discusses how to compute the integral of the payoff function multiplied by the density using exponential and cosine functions in the Fourier transform for option pricing. The speaker presents a generic form for the two integrals involved and shows how selecting different coefficients allows various payoffs to be calculated. The speaker emphasizes the importance of implementing this technique for multiple strikes, allowing all strikes to be priced at once, which saves computation time. Finally, the speaker writes the pricing representation in the form of a matrix multiplied by a vector.
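
  The two generic integrals mentioned here are, in the standard COS formulation of Fang and Oosterlee, the cosine moments of e^y and of 1 over a subinterval [c, d] of [a, b]; writing u_k = kπ/(b − a), their closed forms are:

  ```latex
  \chi_k(c,d) = \int_c^d e^{y}\cos\big(u_k(y-a)\big)\,dy
  = \frac{\cos\big(u_k(d-a)\big)e^{d} - \cos\big(u_k(c-a)\big)e^{c}
        + u_k\!\left[\sin\big(u_k(d-a)\big)e^{d} - \sin\big(u_k(c-a)\big)e^{c}\right]}{1+u_k^2},
  \qquad
  \psi_k(c,d) = \int_c^d \cos\big(u_k(y-a)\big)\,dy
  = \begin{cases}
  \dfrac{\sin\big(u_k(d-a)\big)-\sin\big(u_k(c-a)\big)}{u_k}, & k \neq 0,\\[6pt]
  d-c, & k = 0.
  \end{cases}
  ```

  For a call with y = log(S_T/K), the payoff coefficients combine these as H_k = (2/(b − a)) K (χ_k(0, b) − ψ_k(0, b)).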

  • 01:30:00 In this section of the lecture, the implementation formula for the Fourier transform approach to option pricing is discussed. It involves vectorizing elements and matrix manipulations: the strikes are taken as a vector, and a matrix with one row per strike is built. The formula takes real parts to handle the complex numbers. The characteristic function is of central importance, as it does not depend on x, and it plays a key role in achieving efficient implementations for multiple strikes. The accuracy and convergence of the implementation depend on the number of terms, and a sample comparison is shown.

  • 01:35:00 In this section, the speaker walks through the code used for the Fourier transform method of option pricing and explains the variables involved. They introduce the truncation-range parameter for the coefficients a and b and explain that it is typically kept at 10, or 8 for jump-diffusion models. The code also includes a lambda expression for the characteristic function, a generic function that works for different models. The speaker emphasizes the importance of measuring time by running multiple iterations of the same experiment and averaging the times. Finally, they illustrate the COS method and note that the integration range must be wide enough to accommodate a large volatility.

  • 01:40:00 In this section, the speaker explains the process of defining strikes and calculating coefficients for the Fourier transform method of option pricing. The speaker notes that while tuning the parameters per model can give better convergence with fewer terms, it is generally safe to stick with the standard parameter values. The speaker then details the steps of defining the matrix and performing the matrix multiplication to obtain the discounted option prices, with the resulting error compared against the Black-Scholes formula. Additionally, the speaker demonstrates how introducing additional strikes leads to a smoother curve and makes calibrating the model to multiple strikes easier.
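
  Putting the pieces together, the matrix-vector pricing for multiple strikes can be sketched as follows (a hedged reconstruction using the standard Fang-Oosterlee COS formulas, not the lecture's exact code; the choices L = 10 and N = 512 are illustrative, and the Black-Scholes comparison mirrors the error check described above):

  ```python
  import numpy as np
  from scipy.stats import norm

  def chi_psi(k, a, b, c, d):
      # chi_k = int_c^d e^y cos(u_k (y-a)) dy, psi_k the same without e^y
      u = k * np.pi / (b - a)
      chi = (np.cos(u * (d - a)) * np.exp(d) - np.cos(u * (c - a)) * np.exp(c)
             + u * (np.sin(u * (d - a)) * np.exp(d) - np.sin(u * (c - a)) * np.exp(c)))
      chi /= 1.0 + u**2
      psi = np.empty_like(u)
      psi[0] = d - c
      psi[1:] = (np.sin(u[1:] * (d - a)) - np.sin(u[1:] * (c - a))) / u[1:]
      return chi, psi

  def cos_call(cf, S0, K, r, tau, L=10.0, N=512):
      a, b = -L * np.sqrt(tau), L * np.sqrt(tau)    # truncated integration range
      k = np.arange(N)
      u = k * np.pi / (b - a)
      chi, psi = chi_psi(k, a, b, 0.0, b)
      U = 2.0 / (b - a) * (chi - psi)               # call payoff coefficients
      x = np.log(S0 / K)                            # one log-moneyness per strike
      # matrix of Re{ cf(u_k) * exp(i*u_k*(x_j - a)) }, one row per strike
      M = np.real(cf(u)[None, :] * np.exp(1j * np.outer(x, u) - 1j * u * a))
      M[:, 0] *= 0.5                                # half weight on the k = 0 term
      return np.exp(-r * tau) * K * (M @ U)         # matrix-vector product

  # Illustrative Black-Scholes setup: cf of the log-return under GBM
  S0, r, sigma, tau = 100.0, 0.05, 0.2, 1.0
  cf = lambda u: np.exp(1j * u * (r - 0.5 * sigma**2) * tau
                        - 0.5 * sigma**2 * u**2 * tau)
  K = np.array([80.0, 90.0, 100.0, 110.0, 120.0])   # all strikes priced at once

  cos_prices = cos_call(cf, S0, K, r, tau)

  # reference: closed-form Black-Scholes call prices
  d1 = (np.log(S0 / K) + (r + 0.5 * sigma**2) * tau) / (sigma * np.sqrt(tau))
  d2 = d1 - sigma * np.sqrt(tau)
  bs_prices = S0 * norm.cdf(d1) - K * np.exp(-r * tau) * norm.cdf(d2)
  print(np.max(np.abs(cos_prices - bs_prices)))
  ```

  Because the characteristic function is evaluated only once for all strikes, adding strikes costs little more than an extra row in the matrix, which is what makes calibration across many strikes cheap.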
Computational Finance: Lecture 8/14 (Fourier Transformation for Option Pricing)
  • 2021.04.09
This course is based on the book: "Mathematic...