Quantitative trading - page 20

 

What are the challenges of discretizing the CIR process using the Euler method?

Welcome to the series of questions and answers based on the course of Computational Finance. Today, we have Question 22, which is based on Lecture 10. The question pertains to the challenges of discretizing the Cox-Ingersoll-Ross (CIR) process using Euler's method.

The CIR process is a popular stochastic process, particularly used in the dynamics of the Heston model. It is a non-negative process with a mean-reverting behavior. The variance in the CIR process can fluctuate around a long-term mean, exhibiting volatility. Notably, the solution of this process follows a non-central chi-square distribution, which has fatter tails compared to commonly known distributions such as the normal or log-normal.

One important characteristic of the CIR process is the so-called Feller condition. This condition states that if two times the mean-reversion speed multiplied by the long-term mean is greater than the squared volatility-of-variance parameter (2κv̄ > γ²), the paths, and the distribution of the process, stay away from zero. If this condition is not satisfied, probability mass accumulates around zero, and paths are much more likely to approach zero.

In terms of simulation, this accumulation around zero and the increased likelihood of extreme events pose challenges. The Feller condition is rarely satisfied when the Heston model is calibrated to market data, so it becomes crucial when simulating the model. Inaccurate discretization can result in inconsistencies between the Monte Carlo simulation and Fourier inversion, leading to unreliable pricing of market instruments.

The Euler discretization, as discussed in Lecture 10, proceeds iteratively, each step depending on the previous one. A step involves the previous realization of the variance, the mean-reversion parameters, a time increment Δt, the volatility-of-variance γ multiplied by the square root of the previous realization, and a normally distributed increment of Brownian motion. However, because the normally distributed random variable Z is unbounded, the Euler step can make the variance negative.

The probability of the variance becoming negative under Euler discretization can be derived explicitly. This probability depends on the normal distribution of Z and on the inequality between the right-hand side and left-hand side of the derived expression. The further the Feller condition is from being satisfied, the higher the probability of negative realizations. Negative variances can make the simulation explode and produce incorrect results if not properly handled, for example by truncation or reflection.
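As a concrete illustration, here is a minimal Python sketch of the Euler scheme for the CIR variance process. The "full truncation" fix (flooring the variance at zero inside the drift and diffusion) is one standard remedy, and the parameter values below are chosen only so that the Feller condition fails:

```python
import numpy as np

def simulate_cir_euler(v0, kappa, vbar, gamma, T, n_steps, n_paths, seed=42):
    """Plain Euler discretization of the CIR process
        dv(t) = kappa*(vbar - v(t))*dt + gamma*sqrt(v(t))*dW(t).

    The Gaussian increment is unbounded, so the raw Euler step can go
    negative; here we apply 'full truncation' (max(v, 0) inside drift and
    diffusion) so the scheme does not explode, and we count how often the
    raw step would have produced a negative variance.
    """
    rng = np.random.default_rng(seed)
    dt = T / n_steps
    v = np.full(n_paths, v0)
    neg_events = 0
    for _ in range(n_steps):
        z = rng.standard_normal(n_paths)
        v_plus = np.maximum(v, 0.0)
        v_next = v + kappa * (vbar - v_plus) * dt \
                   + gamma * np.sqrt(v_plus * dt) * z
        neg_events += int(np.sum(v_next < 0))
        v = v_next
    return np.maximum(v, 0.0), neg_events

# Feller condition: 2*kappa*vbar >= gamma**2 keeps the process away from zero
kappa, vbar, gamma, v0 = 0.5, 0.04, 0.6, 0.04
print("Feller satisfied:", 2 * kappa * vbar >= gamma**2)  # False here
v_T, neg = simulate_cir_euler(v0, kappa, vbar, gamma, T=1.0,
                              n_steps=250, n_paths=10000)
print("raw Euler steps that went negative:", neg)
```

With these (illustrative) parameters the Feller condition is badly violated, and many raw Euler steps cross below zero, which is exactly the problem discussed above.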

It is essential to address these challenges of Euler discretization for the CIR process to ensure accurate simulation results. In practice, the Feller condition must be checked, even though it is often not satisfied when models are calibrated to market data. Inconsistent pricing results between Monte Carlo and Fourier-based methods are a red flag, highlighting the need for accurate discretization schemes in computational finance.

I hope this explanation clarifies the challenges associated with discretizing the CIR process using Euler's method. If you have any further questions, feel free to ask.

Source: Computational Finance Q&A, Volume 1, Question 22/30 (www.youtube.com, 2023.03.16).
 

Why do we need Monte Carlo if we have FFT methods for pricing?

Welcome to the question and answer session based on the lecture series on Computational Finance. Today we have question number 23, which is related to the materials covered in lecture number 10. The question is: Why do we need Monte Carlo if we have fast Fourier transformation methods for pricing? This question challenges us to consider the practicality of different pricing techniques and why Monte Carlo methods are still relevant despite not being the fastest.

In practice, both approaches are needed. We require very fast methods for pricing European options, which can be efficiently priced using methods like the COS method or fast Fourier transformation. However, when it comes to pricing exotic derivatives, we often need more flexible methods, even if they are not the fastest. Exotic derivatives can have complex structures and features that cannot be easily handled by fast Fourier transformation. Additionally, the need for extremely fast pricing is not always crucial for exotic derivatives.

When pricing exotic derivatives, we typically start by calibrating a pricing model using simpler instruments such as European options. Since exotic derivatives are less liquid, it is challenging to find market prices for similar exotic derivatives for calibration purposes. However, European options are more readily available, and their prices can be used to calibrate the model. This approach allows us to extrapolate the calibrated model parameters to price exotic derivatives. It's important to note that this strategy may not always work well, especially with local volatility models, as it can lead to mispricing. However, in this course, we focus primarily on log-normal stochastic volatility models, which are less sensitive to this issue.

Let's summarize a few key points. Monte Carlo methods are mainly used for pricing exotic callable derivatives, while fast Fourier methods offer speed advantages for pricing European options. The reason European options receive a lot of attention is that their pricing serves as a building block for calibrating models and pricing more complex derivatives. Efficient pricing of European options is crucial for model calibration, as it allows us to match the model prices with market data. If a model cannot price European options efficiently, it will likely be impractical for real-world use. An example is the Heston model with time-dependent parameters, where numerical evaluation of the characteristic function can be very slow, making calibration challenging. However, if we assume time-dependent but piecewise constant parameters, we can still find an efficient characteristic function, albeit with reduced flexibility.

Pricing speed is crucial, particularly during the calibration phase, which involves numerous iterations. The optimizer tries various combinations of model parameters to find the best fit to market data, requiring thousands or even hundreds of thousands of evaluations. Therefore, every millisecond saved is essential. It's worth mentioning that although fast Fourier transformation can provide efficient pricing for certain exotic derivatives like Bermudan options, it is not a generic solution. Adding additional features or parameters may require a significant modification of the method. In contrast, Monte Carlo methods inherently provide flexibility, making them suitable for pricing a wide range of exotic derivatives. In practice, fast Fourier transformations are often used for calibration, while Monte Carlo methods are used for pricing exotic derivatives.
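To illustrate the flexibility argument, the sketch below prices an arithmetic-average Asian call by Monte Carlo under geometric Brownian motion. This payoff has no simple characteristic function, so a Fourier method does not apply directly, while in Monte Carlo the payoff is a one-line change (all parameter values are illustrative):

```python
import numpy as np
from math import sqrt

def arithmetic_asian_call_mc(S0, K, T, r, sigma,
                             n_steps=50, n_paths=100000, seed=11):
    """Monte Carlo price of an arithmetic-average Asian call under GBM.

    The average of the path has no tractable characteristic function,
    which is why Fourier methods do not handle this payoff directly.
    """
    rng = np.random.default_rng(seed)
    dt = T / n_steps
    z = rng.standard_normal((n_paths, n_steps))
    log_inc = (r - 0.5 * sigma**2) * dt + sigma * sqrt(dt) * z
    S = S0 * np.exp(np.cumsum(log_inc, axis=1))       # paths at monitoring dates
    payoff = np.maximum(S.mean(axis=1) - K, 0.0)      # average-price payoff
    return np.exp(-r * T) * payoff.mean()

print(round(arithmetic_asian_call_mc(100.0, 100.0, 1.0, 0.05, 0.2), 2))
```

Changing the payoff line (for example to a lookback or barrier condition) reuses the same simulated paths, which is precisely the generic flexibility Monte Carlo offers.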

Alternatively, we could consider PDE (partial differential equation) methods, which lie between fast Fourier transformation and Monte Carlo. PDE methods can price callable products efficiently, but they are less flexible in terms of payoff specification, requiring a re-derivation of the pricing problem for each new payoff.

I hope this explanation clarifies the importance of both Monte Carlo and fast Fourier transformation methods in computational finance. See you next time! Goodbye!

Source: Computational Finance Q&A, Volume 1, Question 23/30 (www.youtube.com, 2023.03.23).
 

How to hedge Jumps?

Welcome to today's question and answer session based on the computational finance course. In this session, we will be discussing question number 24, which is related to the materials covered in lecture number 11. The focus of today's question is on hedging jumps.

During lecture number 11, we delved deeply into the aspects of hedging, specifically addressing how to hedge different types of financial instruments. I provided illustrations of a simulation in which a stock was simulated using geometric Brownian motion, as well as processes with jumps. We explored how to develop a hedging strategy and examined the impact of these hedges on the profit and loss (P&L) of a portfolio.

Hedging, at its core, is about minimizing risks. From the perspective of a financial institution, when selling options or other derivatives, the goal is to establish a hedge, which involves offsetting trades. The purpose of this hedge is to ensure that the institution remains unaffected by market fluctuations. Essentially, the institution aims to be immune to the market's ups and downs, while benefiting from the additional premium received on top of the fair value of the derivative pricing.

The question at hand is: How does the hedging process work when dealing with diffusive processes, and what happens when the underlying asset exhibits jumps? This question addresses a challenging aspect of hedging, which requires us to consider models with stochastic volatility, such as the Heston model.

During the lecture, I presented code and demonstrated the hedging strategy. One crucial takeaway is the concept of Delta. Delta represents the sensitivity of the option price to changes in the underlying asset price. In the case of a stock finishing in the money, Delta approaches one, indicating a higher correlation between the option price and the stock price. Conversely, if the stock finishes below the strike price, Delta approaches zero.

In the Black-Scholes setting, the theory assumes continuous rehedging of the portfolio; in practice we rebalance discretely, for example once a day, adjusting the hedging portfolio depending on market fluctuations. The goal is for the combined value of our hedging portfolio and the derivative to be zero at the option's expiration. The quality of the hedge depends on the frequency of rebalancing: as the number of rebalancing steps grows, the distribution of the P&L narrows, approaching the ideal limit of zero fluctuations.
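The effect of the rebalancing frequency can be sketched numerically. This is a simplified Black-Scholes delta-hedging experiment with illustrative parameters, not the lecture's exact code:

```python
import numpy as np
from math import erf, sqrt, exp, log

def bs_call(S, K, T, r, sigma):
    """Black-Scholes call price and Delta."""
    if T <= 0:
        return max(S - K, 0.0), float(S > K)
    d1 = (log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    N = lambda x: 0.5 * (1.0 + erf(x / sqrt(2.0)))   # standard normal CDF
    return S * N(d1) - K * exp(-r * T) * N(d2), N(d1)

def hedge_pnl(n_rebalance, n_paths=1000, S0=100.0, K=100.0,
              T=1.0, r=0.01, sigma=0.2, seed=1):
    """Sell one call, delta-hedge at n_rebalance equally spaced dates,
    return the standard deviation of the terminal P&L across paths."""
    rng = np.random.default_rng(seed)
    dt = T / n_rebalance
    S = np.full(n_paths, S0)
    price0, delta0 = bs_call(S0, K, T, r, sigma)
    cash = price0 - delta0 * S0            # premium received minus stock bought
    delta = np.full(n_paths, delta0)
    for i in range(1, n_rebalance + 1):
        z = rng.standard_normal(n_paths)
        S = S * np.exp((r - 0.5 * sigma**2) * dt + sigma * sqrt(dt) * z)
        cash = cash * exp(r * dt)          # cash account accrues interest
        t_left = T - i * dt
        if t_left > 0:                     # rebalance to the new Delta
            new_delta = np.array([bs_call(s, K, t_left, r, sigma)[1] for s in S])
            cash -= (new_delta - delta) * S
            delta = new_delta
    pnl = cash + delta * S - np.maximum(S - K, 0.0)
    return float(np.std(pnl))

# More frequent rebalancing -> narrower P&L distribution
print(hedge_pnl(10), ">", hedge_pnl(100))
```

The standard deviation of the hedging error shrinks as the number of rebalancing dates grows, which is the narrowing of the P&L distribution described above; with jumps in the underlying, this narrowing breaks down.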

However, when dealing with jumps, the impact on hedging becomes more challenging. Even with increased frequency of rebalancing, the distribution of P&L widens. This means that the risk associated with jumps requires a different treatment. One possible approach is to follow the hedging strategy used in models with stochastic volatility, such as the Heston model. In these models, the portfolio replicating the option involves additional terms that help hedge risks associated with stochastic volatility. Specifically, these additional terms involve buying or selling options with different strikes to offset the risk. It is essential to consider the liquidity of the options involved to optimize the hedging strategy.

In the case of jumps, further research suggests that in order to achieve a good hedge, one may need to include approximately seven additional options with different strikes. This additional complexity highlights the importance of understanding the strategy of hedging models with stochastic volatility when addressing jump risks.

To summarize, hedging jumps poses challenges that require a thoughtful approach. By incorporating strategies from hedging models with stochastic volatility, it is possible to mitigate the impact of jumps on hedging strategies. The inclusion of additional options with different strikes can further enhance the effectiveness of the hedge. Remember, while this discussion provides valuable insights, it is important to consider the specific dynamics and risks associated with the derivatives and counterparties involved.

Source: Computational Finance Q&A, Volume 1, Question 24/30 (www.youtube.com, 2023.03.26).
 

What is pathwise sensitivity?

Welcome to today's question and answer session on the topic of computational finance. In today's session, we will discuss question number 25, which pertains to the concept of pathwise sensitivity. Sensitivity calculations play a crucial role in portfolio hedging, as they help reduce risks and make the portfolio less susceptible to market fluctuations.

When selling derivatives, it is desirable to establish a hedging portfolio that remains unaffected by market movements. This means that the overall risk associated with the derivative and the hedging portfolio combined should be immune to market fluctuations. Achieving this perfect hedge allows us to maintain the premium received when initially selling the derivative. In lecture number 11, we covered the details of hedging strategies and discussed the importance of accurately calculating sensitivities.

A common approach to calculating sensitivities, such as the sensitivity with respect to a parameter like volatility, is to use finite-difference approximations. This involves approximating the derivative of the option value with respect to the parameter using a small increment (a shock, Delta hat). However, this approach has limitations. Firstly, it requires evaluating the option value at least twice, which can be computationally expensive, especially when dealing with a large number of parameters. Secondly, the accuracy of the approximation is sensitive to the choice of the increment, which can lead to significant errors.

Pathwise sensitivity offers a more accurate alternative for calculating sensitivities. It involves exchanging the order of differentiation and integration to simplify the expression. By leveraging analytical calculations for certain elements of the expression, we can improve convergence and accuracy compared to finite difference approximations. This approach is particularly beneficial when the payoff of the derivative does not depend on the parameter being differentiated. In such cases, the sensitivity can be calculated explicitly without the need for additional approximations.

For example, when considering the sensitivity of a call option with respect to the stock price (Delta), the pathwise sensitivity method allows us to calculate the expectation of the stock given that it is greater than the strike price. Similarly, for the sensitivity with respect to volatility (Vega), the method simplifies the calculation by using the same common factor and evaluating the expectation using Monte Carlo paths of the stock.
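The Delta example above can be sketched in a few lines. Under geometric Brownian motion the pathwise derivative of the terminal stock with respect to the initial stock is S(T)/S(0), which gives the estimator below; the comparison against the analytical Black-Scholes Delta uses illustrative parameters:

```python
import numpy as np
from math import erf, sqrt, exp, log

def pathwise_delta_call(S0, K, T, r, sigma, n_paths=200000, seed=7):
    """Pathwise estimator of a European call's Delta under GBM.

    Differentiating the payoff inside the expectation gives
        Delta = exp(-r*T) * E[ 1{S_T > K} * S_T / S0 ],
    since dS_T/dS0 = S_T/S0 for geometric Brownian motion.
    """
    rng = np.random.default_rng(seed)
    z = rng.standard_normal(n_paths)
    ST = S0 * np.exp((r - 0.5 * sigma**2) * T + sigma * sqrt(T) * z)
    return exp(-r * T) * float(np.mean((ST > K) * ST / S0))

# Analytical Black-Scholes Delta N(d1) for comparison
S0, K, T, r, sigma = 100.0, 100.0, 1.0, 0.05, 0.2
d1 = (log(S0 / K) + (r + 0.5 * sigma**2) * T) / (sigma * sqrt(T))
bs_delta = 0.5 * (1.0 + erf(d1 / sqrt(2.0)))
print(round(pathwise_delta_call(S0, K, T, r, sigma), 3), "vs", round(bs_delta, 3))
```

Note that only one set of Monte Carlo paths is needed; no bump-and-reprice, and no dependence on a finite-difference increment.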

Applying the pathwise sensitivity method can lead to improved convergence and accuracy while reducing the number of Monte Carlo paths required for calculations. It also eliminates the need for evaluating the derivative value multiple times, resulting in computational efficiency.

It is worth noting that while the pathwise sensitivity method works well in models like Black-Scholes, where analytical solutions for Greeks exist, it can also be applied to more complex models like the Heston model. Analytical expressions for certain derivatives can still be obtained, enabling accurate sensitivity calculations.

For more details and numerical requirements, I recommend revisiting lecture number 11 and referring to the book and lecture materials, which provide a comparison between pathwise sensitivity and finite difference methods. The results demonstrate the superior convergence and accuracy achieved by pathwise sensitivity, allowing for high-quality results with fewer Monte Carlo paths.

If you have further questions, please feel free to ask, and I'll be happy to provide additional insights.

Source: Computational Finance Q&A, Volume 1, Question 25/30 (www.youtube.com, 2023.03.30).
 

What is the Bates model, and how can it be used for pricing?

Welcome to this series of questions and answers based on the course of Computational Finance. Today, we have question number 26 out of 30, which is based on lecture number 12.

The question is as follows: "What is the Bates model, and how can it be used for pricing?"

The Bates model is an extension of the stochastic volatility model of Heston. Compared with the Heston dynamics, it adds two elements: a jump part driven by a Poisson process and an adjustment of the drift known as the martingale correction, which compensates for the expected contribution of the jumps so that the discounted asset remains a martingale. The derivation of this correction can be found in the lecture notes.

Now, let's focus on the Bates model itself. The Bates model incorporates an additional jump component, which is independent of the Brownian motions. The jump sizes are driven by a normally distributed variable J with mean μJ and variance σJ². The magnitude of a jump is the exponential of J, so negative values of J correspond to downward moves. Whether a jump occurs is determined by the Poisson process.

One important characteristic of the Bates model is that the jump add-on is uncorrelated with the Brownian motion, making it an independent component. The reason for this independence lies in the characteristic function of the Bates model. By examining the characteristic function, we can observe that it is a product of the Heston model and the jump component. If we were to correlate the two, it would significantly complicate the derivation of the characteristic function.
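The product structure can be sketched by writing down the jump factor separately. The form below assumes lognormal jump sizes e^J with J normal, as in the standard Bates model, and includes the martingale correction; the two printed checks (value one at u = 0 and at u = −i) follow from it being a proper, compensated characteristic-function factor:

```python
import numpy as np

def bates_jump_chf(u, lam, muJ, sigJ, T):
    """Characteristic-function factor contributed by the jump part of the
    Bates model (jump sizes e^J with J ~ N(muJ, sigJ^2), intensity lam).

    The full Bates ChF of the log-asset is the product
    phi_Heston(u) * phi_jump(u); the second term in the exponent below is
    the martingale (drift) correction offsetting the expected jump size.
    """
    u = np.asarray(u, dtype=complex)
    jump_mean = np.exp(muJ + 0.5 * sigJ**2) - 1.0        # E[e^J] - 1
    return np.exp(lam * T * (np.exp(1j * u * muJ - 0.5 * u**2 * sigJ**2) - 1.0)
                  - 1j * u * lam * T * jump_mean)

lam, muJ, sigJ, T = 0.8, -0.1, 0.3, 1.0
# phi(0) = 1 (it is a characteristic function) and phi(-i) = 1
# (the martingale correction exactly offsets the jump drift).
print(abs(bates_jump_chf(0.0, lam, muJ, sigJ, T)))
print(abs(bates_jump_chf(-1j, lam, muJ, sigJ, T)))
```

Because the jumps are independent of the Heston part, multiplying an existing Heston characteristic function by this factor is all that is needed to reuse COS or FFT pricers for Bates.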

The motivation behind introducing the Bates model is to enhance the flexibility of the Heston model in calibrating to market data. Researchers discovered that the Heston model struggles to accurately calibrate options with extremely short maturities, such as options expiring within a week or a month. The model's lack of flexibility in generating the observed market skew prompted the addition of jumps. By incorporating jumps, the Bates model can introduce more skew to match the market data.

It is important to note that the jumps in the Bates model are initially very active and add a significant amount of skew to the model. However, over time, they diffuse, and the model converges to the Heston model. This convergence can be easily observed in lecture number 12 and the corresponding book.

Furthermore, the Bates model allows for different distributions for the jump generator, J, instead of assuming it to be normally distributed, as done in the standard Bates model. Varying the distribution can have an impact on the resulting skew, offering flexibility in modeling different market scenarios. However, it is also recognized that even with the jumps provided by the Bates model, the skew may still be insufficient for extreme market scenarios.

Now, let's discuss the impact of the Bates model on implied volatilities. The model introduces three additional parameters: the intensity (λ) for the Poisson process, the mean (μJ) for the normally distributed jump, and the standard deviation (σJ) of the jump. Increasing the intensity or the standard deviation primarily increases the level and curvature of the implied volatilities, respectively. However, it is the mean of the jump (μJ) that significantly affects the skew. Negative and strongly negative values of μJ add a substantial amount of skew to the model.

The mean of the jump (μJ) is a crucial parameter in the Bates model. Recall that in the Heston model the skew is mainly controlled by the correlation between the asset and the variance process; introducing negative correlation enhances the skew. If further skew is desired, jumps are added to the model. It is essential to consider the calibration objectives, particularly when dealing with short-maturity options or exotic derivatives dependent on future realizations. In such cases, the benefit of calibrating jumps for long maturities may be limited, and the additional parameters introduced by jumps can pose challenges.

In summary, the Bates model extends the Heston model by incorporating jumps, providing more flexibility in calibrating to market data, especially for options with short maturities. By introducing jumps, the model can enhance the skew and better match the observed market conditions. The mean of the jump (μJ) is a key parameter in controlling the skew. However, it is important to evaluate the trade-offs and consider the objectives of pricing when deciding whether to use the Bates model or the Heston model. For further details and in-depth analysis, I recommend revisiting lecture number 12.

Source: Computational Finance Q&A, Volume 1, Question 26/30 (www.youtube.com, 2023.04.03).
 

What is the relation between European and Forward-start options?

Welcome to this series of questions and answers based on the course of Computational Finance. Today, we have question number 27, which is based on materials discussed in lecture number 12. The question is as follows:

"What is the relation between European options and forward start options?"

Forward start options are a type of non-standard derivative also known as performance options. They differ from European options in terms of their start and expiry dates. In a forward start option, the contract starts in the future and the expiry date is even further in the future.

To understand the relation between European options and forward start options, let's consider the following scenario. Suppose we have three time points: t0, t1, and t2. In a European option, we would calculate the discounted expected future payoff at time t2 based on the stock's distribution at that time. This means we price the option with a starting date of t0 and evaluate the payoff at t2.

In contrast, forward start options start at t1, which means they begin at an uncertain point in the future when the stock's value is unknown. These options focus on the performance of the stock over a specific period of time. The performance is typically measured as the ratio of the stock's value at t2 minus its value at t1, divided by its value at t1.

Forward start options are particularly useful for investors who are interested in the performance of a stock over a specific time period, rather than its absolute level. These options allow investors to participate in the upside potential of a stock's performance during the chosen interval.

Forward start options serve as building blocks for more exotic derivatives, such as cliquet options, where performance analysis is an essential component. By considering performances over multiple intervals, these options can be structured to lock in profits at each point while protecting against downside potential. The investor receives the maximum of the performances or a predetermined payout, creating a risk-averse option with reduced investment cost compared to traditional European options.

Mathematically, forward start options involve two important dates: the future date T1, when the option's strike is set, and the expiry date T2. The payoff of a European forward-start option can then be written as the performance ratio minus the strike, floored at zero: max(S(T2)/S(T1) − K, 0).

The key characteristic of forward start options is that their value does not depend on the initial stock value (S(t0)). Instead, it is determined by the stock's performance in the future. This property makes them appealing for investors interested in the performance of a stock over a specific time period.

To price a forward start option, we consider the discounted expected future payoff at the expiry date (T2) using appropriate pricing methods. The value of the forward start option is not influenced by the current stock price, but rather by the performance of the stock over the specified time interval.
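The independence from the initial stock value can be verified numerically. The sketch below prices the payoff max(S(T2)/S(T1) − K, 0) by Monte Carlo under geometric Brownian motion and shows that two very different starting levels give the same price (parameters are illustrative):

```python
import numpy as np
from math import sqrt

def fwd_start_mc(S0, K, T1, T2, r, sigma, n_paths=200000, seed=3):
    """Monte Carlo price of a forward-start performance option with payoff
    max(S(T2)/S(T1) - K, 0), simulated under geometric Brownian motion."""
    rng = np.random.default_rng(seed)
    z1 = rng.standard_normal(n_paths)
    z2 = rng.standard_normal(n_paths)
    S1 = S0 * np.exp((r - 0.5 * sigma**2) * T1 + sigma * sqrt(T1) * z1)
    S2 = S1 * np.exp((r - 0.5 * sigma**2) * (T2 - T1)
                     + sigma * sqrt(T2 - T1) * z2)
    # The ratio S2/S1 does not involve S0, so neither does the price.
    return float(np.exp(-r * T2) * np.mean(np.maximum(S2 / S1 - K, 0.0)))

p_low = fwd_start_mc(S0=50.0, K=1.0, T1=1.0, T2=2.0, r=0.03, sigma=0.2)
p_high = fwd_start_mc(S0=200.0, K=1.0, T1=1.0, T2=2.0, r=0.03, sigma=0.2)
print(round(p_low, 4), round(p_high, 4))  # essentially identical
```

Under Black-Scholes this holds exactly; under stochastic volatility models the price still does not depend on S(t0), but it does depend on the distribution of the variance at T1, which is why forward-start options probe the model's forward smile.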

In summary, forward start options are a type of non-standard derivative that allow investors to focus on the performance of a stock over a particular time period. They provide a risk-averse alternative to European options, allowing for reduced investment costs while still offering exposure to specific assets. The value of a forward start option does not depend on the initial stock value, emphasizing the importance of the stock's performance in the future.

I hope this explanation clarifies the relation between European options and forward start options. If you have any further questions, feel free to ask. See you next time!

Source: Computational Finance Q&A, Volume 1, Question 27/30 (www.youtube.com, 2023.04.07).
 

What instruments to choose to calibrate your pricing model?

Welcome to the Questions and Answers session on Computational Finance. Today's question is number 28 out of 30, and it pertains to choosing instruments for calibration in a pricing model.

In this pricing exercise, we have a system of stochastic differential equations that we want to utilize for pricing an exotic derivative. The question is, how do we calibrate the model and which instruments should we choose for this purpose to accurately price the exotic derivative?

The general principle is to use hedging instruments as calibration instruments. This means that if the market instruments, such as implied volatilities and yield curves, have an impact on the pricing of the exotic derivative, they should be incorporated into the calibration routine.

Let's consider a simplified example with a volatility surface. We have a matrix of implied volatilities corresponding to different strike prices and expiries. To determine the sensitivity of our exotic derivative to these market instruments, we can perform the following steps:

  1. Start with a set of market instruments and price the exotic derivative.
  2. Perturb or "shock" one of the market instruments, such as the implied volatility, by a small amount (epsilon).
  3. Recalculate the price of the exotic derivative using the new market data (the shocked instrument).
  4. If the difference between the two prices is zero, it implies that the exotic derivative is insensitive to that specific market instrument.
  5. Repeat this process for each market instrument to assess their impact on the exotic derivative (this is known as calculating the Vega array).
  6. If the price difference is non-zero, it indicates that the exotic derivative is sensitive to that market instrument. Such instruments should be included in the calibration process since they can be used for hedging purposes. Buying or selling options, particularly European options, related to the sensitive market instrument allows us to hedge against the associated risk.
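The bump-and-reprice procedure above can be sketched generically. Here `price_exotic` is a placeholder for any pricer that takes the full implied-volatility matrix, and the toy pricer is purely hypothetical:

```python
import numpy as np

def vega_matrix(price_exotic, implied_vols, eps=1e-4):
    """Bump-and-reprice sensitivity of an exotic's price to each
    implied-volatility quote (the 'Vega array').

    Entries with (near-)zero sensitivity identify market instruments the
    exotic does not depend on, which need not enter the calibration.
    """
    base = price_exotic(implied_vols)
    vega = np.zeros_like(implied_vols)
    for idx in np.ndindex(implied_vols.shape):
        bumped = implied_vols.copy()
        bumped[idx] += eps                 # shock one quote by epsilon
        vega[idx] = (price_exotic(bumped) - base) / eps
    return vega

# Toy pricer (hypothetical): exotic only sensitive to the first expiry row
vols = np.full((2, 3), 0.2)                # 2 expiries x 3 strikes
toy_pricer = lambda v: float(np.sum(v[0]))
print(vega_matrix(toy_pricer, vols))
```

In this toy example only the first row of the Vega array is non-zero, so only those quotes, which are also the natural hedging instruments, would be kept as calibration instruments.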

To summarize the steps involved in pricing an exotic derivative:

  1. Start with a specific derivative product.
  2. Determine the appropriate stochastic differential equations that suit the pricing of the derivative, considering factors like smile, skew, or stochastic interest rates.
  3. Calibrate the model by selecting suitable instruments for calibration, typically European options for equity markets.
  4. Use mathematical techniques (e.g., partial differential equations, integral forms, Fourier expansions) to model the product price based on the chosen stochastic differential equations.
  5. Evaluate the exotic derivative using numerical methods, such as solving PDEs or employing Monte Carlo simulations.
  6. Manage the risk associated with the derivative by recalibrating the pricing model and adjusting the hedging coefficients.

In conclusion, always use hedging instruments of your exotic derivative as the calibration instruments. This approach ensures that the calibration process incorporates the market factors that significantly affect the pricing of the exotic derivative. Additionally, managing risk through hedging is crucial for maintaining control over the derivative's associated risks.

Source: Computational Finance Q&A, Volume 1, Question 28/30 (www.youtube.com, 2023.04.13).
 

How to calibrate a pricing model? How to choose the objective function?

Welcome to Questions and Answers, focused on computational finance. Today, we are on question number 29 out of 30, nearing the end of the first volume in this series. The question of the day is how to calibrate a pricing model and select the objective function.

Calibration in finance is often regarded as an art since there is no one-size-fits-all recipe that works for all pricing methods and models. Each calibration approach is unique and requires a deep understanding of the model at hand, as well as skill in achieving a good calibration. However, there are several principles and considerations to keep in mind when calibrating a model.

For instance, when dealing with a stochastic volatility model like Heston or others, which are commonly used to price exotic derivatives such as forward-start options or callable derivatives, it is crucial to choose instruments that are relevant to the derivative being priced. If a derivative expires in five years and its value depends on volatilities during this period, it would be pointless to calibrate the model to instruments that mature 30 or 40 years in the future. To identify relevant instruments, sensitivity analysis plays a vital role. By modifying the volatilities of market instruments one by one and observing the resulting changes in the derivative's value, one can determine the instruments to which the model is sensitive.

When calibrating a model for pricing exotics, typically to European options, it is essential to avoid calibrating to irrelevant instruments. Using all available instruments for calibration without considering their relevance can result in a loss of flexibility, for example when long-term options dominate the fit while the exotic's risk lies in the short end. It is necessary to carefully select the instruments used for calibration and focus on those that align with the desired hedging objectives.

From a trader's perspective, it is crucial to calibrate the model to instruments that exist and can be bought or sold in the market. This ensures that the calibration is relevant and applicable in real trading scenarios. Therefore, the availability and liquidity of the instruments should be considered during the calibration process.

European options, specifically the most liquid ones, are often used for calibration when pricing exotic derivatives. This choice is driven by their liquidity and suitability for hedging purposes. However, in cases where more straightforward exotic derivatives are available and liquid in the market, those instruments may be preferred for offsetting the hedge.

In general, calibrating models for exotic derivatives can be complex. In such cases, a standard approach is to calibrate the model to European options and focus on achieving a good fit at the at-the-money point, as this is the most critical region. The at-the-money point represents the level where the market and model values must align closely, regardless of the presence of smiles or skews in other regions of the implied volatility surface. Putting extra weight on the at-the-money options during optimization helps ensure a good calibration in this critical region.

When defining the objective function for calibration, there are different approaches to consider. The standard approach involves using a weighted target function, as described in the book and covered in lecture number 13. This function involves summing over all relevant option expiries and strikes, applying weights (denoted as Omega) to each term, and calculating the squared difference between market prices and model prices. The objective is to find model parameters (Theta) that minimize this difference, thereby matching the option prices in the market.
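As a sketch of this weighted target function, the snippet below calibrates a toy model. For transparency it uses Black-Scholes with a single parameter (sigma) standing in for the full Heston parameter vector Theta, and synthetic quotes in place of real market prices; everything here is a hypothetical illustration of the structure of the objective, not the lecture's actual calibration code.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

def bs_call(S, K, T, r, sigma):
    """Black-Scholes price of a European call."""
    d1 = (np.log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * np.sqrt(T))
    d2 = d1 - sigma * np.sqrt(T)
    return S * norm.cdf(d1) - K * np.exp(-r * T) * norm.cdf(d2)

# Synthetic "market" quotes across strikes (all values hypothetical)
S0, r = 100.0, 0.02
strikes = np.array([90.0, 100.0, 110.0])
expiry  = 1.0
market  = bs_call(S0, strikes, expiry, r, 0.25)   # generated with sigma = 0.25
omega   = np.array([1.0, 5.0, 1.0])               # extra weight at-the-money

def objective(theta):
    """Weighted sum of squared differences between market and model prices."""
    model = bs_call(S0, strikes, expiry, r, theta[0])
    return np.sum(omega * (market - model) ** 2)

result = minimize(objective, x0=[0.4], bounds=[(1e-4, 2.0)])
calibrated_sigma = result.x[0]
```

Because the synthetic quotes were generated with sigma = 0.25, the optimizer should recover a value close to it; with a real model, Theta would be a vector and the sum would run over all relevant expiries and strikes.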

The weight function (Omega) acts as a tuning parameter that helps prioritize the at-the-money options during optimization. It is important to note that small differences in option prices can lead to significant differences in implied volatilities. Hence, a preferred approach is to calibrate based on implied volatilities, as they capture the market's volatility expectations more accurately.

However, calculating implied volatilities can be computationally expensive, especially when dealing with complex pricing models. In such cases, it is common to use option prices directly in the objective function.
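For completeness, here is a minimal sketch of the price-to-implied-volatility inversion: since the Black-Scholes price is monotone in volatility, a bracketing root-finder such as Brent's method recovers the implied volatility from a quoted price. The numbers are illustrative.

```python
import numpy as np
from scipy.optimize import brentq
from scipy.stats import norm

def bs_call(S, K, T, r, sigma):
    """Black-Scholes price of a European call."""
    d1 = (np.log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * np.sqrt(T))
    return S * norm.cdf(d1) - K * np.exp(-r * T) * norm.cdf(d1 - sigma * np.sqrt(T))

def implied_vol(price, S, K, T, r):
    """Find the sigma at which the model price matches the quoted price."""
    return brentq(lambda sig: bs_call(S, K, T, r, sig) - price, 1e-6, 5.0)

quote = bs_call(100.0, 105.0, 1.0, 0.02, 0.30)   # synthetic quote at sigma = 0.30
iv = implied_vol(quote, 100.0, 105.0, 1.0, 0.02)
```

Each inversion requires many model price evaluations, which is exactly why calibrating on implied volatilities becomes expensive when the model price itself is costly to compute.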

The choice of weights in the objective function is subjective and depends on the specific requirements and objectives of the calibration. Typically, higher weights are assigned to at-the-money options to ensure a better fit in the critical region. The weights for out-of-the-money and in-the-money options can be adjusted based on their importance in the pricing model or the desired hedging strategy.

Another consideration when selecting the objective function is the choice of optimization algorithm. There are various optimization algorithms available, such as least squares, maximum likelihood estimation, and simulated annealing, among others. The selection of the algorithm depends on the complexity of the model, the computational resources available, and the desired characteristics of the calibration process, such as speed or accuracy.

It is worth mentioning that calibrating a pricing model is an iterative process. After the initial calibration, it is essential to perform a thorough analysis of the results and assess the quality of the fit. This analysis may involve examining the residual errors, implied volatility smile/skew patterns, and other diagnostics. If the calibration does not meet the desired criteria, further adjustments and iterations are necessary.

Additionally, when calibrating a model, it is essential to consider the robustness of the calibration results. Robustness refers to the stability of the calibrated parameters across different market conditions. It is crucial to verify whether the calibrated parameters produce consistent and reasonable results for a range of market scenarios and instruments.

In summary, when calibrating a pricing model for exotic derivatives, it is important to:

  1. Select relevant market instruments based on sensitivity analysis.
  2. Consider the liquidity and availability of instruments.
  3. Focus on achieving a good fit at the at-the-money point.
  4. Define an objective function that minimizes the difference between market prices and model prices, either in terms of option prices or implied volatilities.
  5. Assign appropriate weights to different options, prioritizing the at-the-money region.
  6. Choose an optimization algorithm suitable for the model complexity and computational resources.
  7. Perform a thorough analysis of the calibration results and assess the quality of the fit.
  8. Consider the robustness of the calibrated parameters across different market conditions.

These principles provide a foundation for calibrating pricing models for exotic derivatives, but it is important to remember that the calibration process is highly dependent on the specific model and market context.

How to calibrate a pricing model? How to choose the objective function?
  • 2023.04.24
 

What are the Chooser options?

Welcome to the final question of this series based on the materials discussed in lecture number 13 of the Computational Finance course. In this question, we will explore Chooser options and their significance in financial engineering.

A Chooser option is a type of exotic derivative that provides the holder with the flexibility to choose between a call option and a put option at a predetermined future time. It allows the investor to delay the decision of whether to buy a call or put option until a specified date, known as time t0, which is in the future. This additional time before making the choice adds value and flexibility to the option.

To understand Chooser options better, let's briefly recap some other types of exotic derivatives discussed in the lecture. Firstly, we have the binary option, also known as a cash-or-nothing option. Binary options come in different variations, but they typically involve an indicator function based on the stock price at maturity: if the stock price exceeds a predetermined strike price (K) at expiry, the option pays out a fixed amount (Q). The risk-neutral expectation of this indicator function equals the probability of the stock price exceeding the strike price at maturity.
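Under Black-Scholes dynamics this payoff has a closed-form price, Q * exp(-rT) * N(d2), where N(d2) is the risk-neutral probability of finishing above the strike. A small sketch, with all parameter values chosen purely for illustration:

```python
import numpy as np
from scipy.stats import norm

def cash_or_nothing_call(S, K, T, r, sigma, Q):
    """Pays Q at maturity if S_T > K; worth Q * exp(-rT) * N(d2) under Black-Scholes."""
    d2 = (np.log(S / K) + (r - 0.5 * sigma**2) * T) / (sigma * np.sqrt(T))
    return Q * np.exp(-r * T) * norm.cdf(d2)

price = cash_or_nothing_call(S=100.0, K=100.0, T=1.0, r=0.05, sigma=0.2, Q=10.0)
```

The price is always between 0 and the discounted payment Q * exp(-rT), reflecting that the option pays Q with some risk-neutral probability and nothing otherwise.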

Next, we have compound options, which are options on options. A compound option provides the holder with the right to enter into another option at a future time. In the case of a compound call option, the holder has the opportunity to purchase a call option on an underlying asset within a specified period (from time t0 to the final maturity T). The inner option represents the call option over this period, while the outer option covers the entire interval. Compound options introduce additional layers of optionality and are commonly used in complex financial scenarios.

Now, let's delve into the Chooser option. Similar to compound options, a Chooser option has two distinct time periods. At time t0 (which is in the future), the investor has the ability to decide whether to buy a call option or a put option. The decision is based on the anticipated behavior of the underlying stock. If the stock is expected to perform well, the call option will likely be more valuable. Conversely, if the stock is expected to decline, the put option may be more attractive. The value of the Chooser option lies in the flexibility to choose between these two options at a later date.
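One way to see the value of this flexibility is a Monte Carlo sketch: simulate the stock to the choice date t0, let the holder pick the more valuable of the call and the put (here both struck at K and expiring at T, valued with Black-Scholes at t0), and discount back. All parameters and helper names are illustrative assumptions, not the lecture's code.

```python
import numpy as np
from scipy.stats import norm

def bs_price(S, K, T, r, sigma, kind):
    """Black-Scholes price of a European call or put."""
    d1 = (np.log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * np.sqrt(T))
    d2 = d1 - sigma * np.sqrt(T)
    if kind == "call":
        return S * norm.cdf(d1) - K * np.exp(-r * T) * norm.cdf(d2)
    return K * np.exp(-r * T) * norm.cdf(-d2) - S * norm.cdf(-d1)

def chooser_mc(S0, K, t0, T, r, sigma, n_paths=200_000, seed=7):
    """Simulate S to the choice date t0; the holder picks the more valuable of
    a call and a put (both struck at K, expiring at T); discount back to today."""
    rng = np.random.default_rng(seed)
    Z = rng.standard_normal(n_paths)
    St0 = S0 * np.exp((r - 0.5 * sigma**2) * t0 + sigma * np.sqrt(t0) * Z)
    tau = T - t0
    value_at_t0 = np.maximum(bs_price(St0, K, tau, r, sigma, "call"),
                             bs_price(St0, K, tau, r, sigma, "put"))
    return np.exp(-r * t0) * value_at_t0.mean()
```

Because the maximum of the two option values is taken inside the expectation, the chooser is worth at least as much as either plain option on its own; the gap is exactly the value of delaying the decision until t0.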

It is important to note that time t0 in the Chooser option is a future time, not the present day, to allow for meaningful decision-making. If t0 were set to the present, the Chooser option would become a trivial exercise. The Chooser option provides an opportunity to enter into a contract over the future period, and it can also be traded on the market if the underlying stock has gained significant value by that time.

Chooser options can be seen as a type of real option, where options on options are utilized in financial derivatives. They offer investors increased flexibility and adaptability to market conditions, making them suitable for various investment strategies and risk management purposes.

In conclusion, a Chooser option is an exotic derivative that grants the investor the choice between a call option and a put option at a predetermined future time (t0). This flexibility adds value and allows the investor to adjust their investment strategy based on market expectations. The presence of the additional time period (t0) distinguishes the Chooser option from other types of options. Compound options, including options on options, are closely related to Chooser options and are frequently used in real options and complex financial scenarios.

What are the Chooser options?
  • 2023.05.01
 

Introduction to Medium-Frequency Trading: Trading in Milliseconds

Dr. Ernest Chan, a prominent figure in quantitative trading, sheds light on the significance of medium-frequency trading (MFT) and its role in understanding the flash crash of 2010. According to Dr. Chan, MFT is a critical aspect of trading that all traders should be aware of, emphasizing the importance of selecting the right trading venues to submit orders. He highlights the need for traders to familiarize themselves with complex order types such as IOC (immediate-or-cancel) and ISO (intermarket sweep) orders, as well as understand the functioning of dark pools. Traders should actively inquire about their brokers' order routing practices and assess whether those practices align with their best interests.

To clarify, Dr. Chan defines MFT as trading with a latency of one to 20 milliseconds, suggesting that all traders engaging in intraday trading fall under this category. Thus, it becomes essential for traders to grasp the nuances of special order types, optimize their order execution strategies, and minimize the impact of their orders to avoid potential profit loss. MFT operates within the realm of intraday trading, where traders must navigate the challenges posed by high-frequency traders and the resulting thin book liquidity. Notably, the U.S. Equity Market has witnessed a surge in HFT activities since 2010, requiring traders to comprehend market microstructure and its impact on their trading profits.

The complexities of trading in the highly liquid U.S. Equity Market are further explored by Dr. Chan. Various order types and routing methods can significantly influence a trader's profitability. Furthermore, the execution of certain orders can inadvertently reveal one's intentions to others, leading to information leakage. Dr. Chan highlights additional challenges faced by traders, including flash crashes, liquidity withdrawals, and illegal market manipulation. To illustrate the impact of HFT activity on liquidity, he presents a startling example using a screenshot from Interactive Brokers. Even highly liquid stocks like Apple show a mere 100 shares of top-of-the-market liquidity during the trading day due to market makers' efforts to avoid exploitation by HFTs, resulting in decreased overall liquidity.

The interplay between HFT, market makers, and market liquidity is discussed in detail. Dr. Chan explains that market makers, due to gaming by HFTs, refrain from placing large orders at the top of the order book, fearing rapid execution that could lead to financial losses. Additionally, a significant portion of liquidity remains hidden in dark pools, making it challenging to assess whether sufficient liquidity exists to execute trading strategies effectively. Dr. Chan points out that approximately one-third of U.S. shares are traded in dark pools, further complicating liquidity evaluation for traders. The discussion touches upon the role of the ISO order type in flash crashes, where an order can rest on one venue while sweeping the order books of other venues. Market makers, upon detecting toxicity in the order flow, can withdraw and cause prices to plummet dramatically.

The video also touches on various trading practices and industry issues, including a case involving a UK retail trader convicted for illegal trading and the concept of spoofing, which can lead to stock market crashes. The speaker delves into the flaws and potential manipulation associated with dark pools. Furthermore, the importance of physical infrastructure, such as co-location, direct agency access, and high-performance trading platforms, is emphasized to reduce latency and optimize high-frequency trading.

In a separate segment, the speaker emphasizes the significance of order flow in trading. Each trade carries a direction, indicating whether it is a buy or sell order. This directional information can serve as a valuable trading signal. Dr. Chan clarifies that MFT is not limited to high-frequency traders or specific markets—it is relevant to all traders, as it can prevent losses and present opportunities during flash crashes. The section concludes with an announcement about an upcoming course on trading in milliseconds.

The video moves on to discuss a new course on algorithmic trading strategies, which is introduced with a generous 75% discount coupon code provided to viewers. The course is part of the Phi course learning track, offering an additional 15% discount for interested participants. The speaker then transitions into a Q&A session, where Dr. Chan addresses various queries from the audience.

One question pertains to the requirement for brokers to route orders to the National Best Bid and Offer (NBBO) or directly to the exchange. Dr. Chan explains that dark pools are accessible to anyone, and traders can request their brokers to direct orders to specific dark pools. He further clarifies that co-locating at a data center, which allows for reduced latency, is not as expensive as commonly believed, making it feasible for retail traders to take advantage of low latency trading.

Dr. Chan delves into the impact of machine learning on MFT, stating that while it can be useful in processing data for high-level strategy development, it may not provide significant benefits for execution strategies. He distinguishes between spoofing, which involves manipulating orders, and order flow, which focuses solely on executed trades and their corresponding buy or sell directives.

The discussion touches upon the measurement of order flow as an indicator and the creation of dark pools. Dr. Chan suggests that the easiest way to measure order flow is by accessing data that includes the aggressive flag for each trade. Additionally, he explains that dark pools are typically established by large brokerages and market makers.

The Q&A session continues with Dr. Chan answering various audience questions. He provides insights on identifying fake or unintended limit orders while analyzing order flow, recommends the book "Algorithmic and High-Frequency Trading" by Irene Aldridge for individuals with a background in math and finance, and suggests using free or inexpensive bar data or data from multiple providers for low-frequency trading. He also clarifies that while each execution occurs on a specific trading venue, the aggregated trade data comprises trades from different exchanges.

The video further addresses questions about analyzing the strength of signals derived from aggregate order flow and accessing dark pools as a retail trader. The importance of thorough signal evaluation before making trading decisions based on aggregate order flow is emphasized. Moreover, the speaker highlights the necessity of obtaining a complete order log feed from exchanges to accurately determine market impact.

An audience question raises the topic of the relationship between order flow and volume, and how dark pools influence this relationship. Dr. Chan clarifies that order flow and volume are distinct measures, with order flow carrying a sign (positive or negative) while volume does not. Consequently, aggregating order flow over a specific period may yield a substantially smaller number compared to the corresponding volume, as orders with opposite signs cancel each other out. The speaker asserts that dark pools do not generate order flow and that volume data does not provide insights into dark pool activity.
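The distinction can be made concrete with a toy trade tape (numbers invented for illustration): each executed trade carries a size and an aggressor sign, and order flow sums the signed sizes while volume sums them unsigned.

```python
# Hypothetical trade tape: (size, aggressor), where aggressor is +1 for a
# buyer-initiated trade and -1 for a seller-initiated one (the "aggressive flag").
trades = [(200, +1), (500, -1), (300, +1), (400, -1), (100, +1)]

volume     = sum(size for size, _ in trades)            # unsigned total
order_flow = sum(size * sign for size, sign in trades)  # signed total
```

Here volume is 1,500 shares while order flow nets to -300, showing how opposite-signed trades cancel in the flow but not in the volume.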

The video concludes with a question regarding the potential application of reinforcement learning in MFT. Dr. Chan confirms that many individuals already employ this technique and underscores the importance of staying up to date with industry advancements.

The video offers valuable insights into MFT, its impact on trading, the challenges faced by traders, and strategies to optimize trading performance. The Q&A session provides further clarity on various aspects, addressing audience queries and expanding on the topics discussed.

  • 00:00:00 Dr. Ernest Chan, one of the industry experts in quantitative trading, discusses the significance of medium-frequency trading (MFT) and how it helped in understanding the flash crash of 2010. He explains that MFT is a crucial aspect of trading that traders need to be aware of, and that they need to know which trading venues to submit their orders to. He also highlights the importance of understanding dark pools and learning about more complex order types such as IOC and ISO orders. Traders need to ask their brokers where they are routing their orders, and whether it benefits them.

  • 00:05:00 The speaker defines medium-frequency trading (MFT) as trading with a latency of one to 20 milliseconds. He argues that all traders, regardless of their holding periods, are MFT traders because they all execute intraday trades within this frequency. Thus, traders need to learn about special order types, trading venues, and order optimization to minimize the impact of their orders and not lose out on profit. MFT lies in the realm of intraday trading, where traders face gaming by high-frequency traders, resulting in thin book liquidity. As U.S. Equity Market volume due to HFT activities has increased since 2010, traders need to be aware of market microstructure and its impact on their trading profits.

  • 00:10:00 The speaker discusses the complexities of trading in the US Equity Market, one of the deepest pools of liquidity in the world. There are different order types and routing methods that affect one's profit, and information leakage occurs when certain orders are executed, revealing one's intentions to others. Moreover, flash crashes, withdrawal of liquidity, and illegal market manipulation are some of the other challenges that traders face. The speaker then provides a shocking example of how HFT activity has affected liquidity, showing a screenshot from Interactive Brokers where even a stock as liquid as Apple only has 100 shares of top-of-the-market liquidity during the trading day. This is due to market makers trying to avoid being picked apart by HFTs, which has led to a decrease in liquidity.

  • 00:15:00 The speaker discusses how HFT and market makers impact the liquidity of the market. Due to the games played by HFTs, market makers do not post large orders at the top of the book, as these could be taken off in no time and cause them to lose money. This, along with much of the liquidity being hidden in dark pools, reduces visible liquidity to a size that makes it unreliable for backtesting strategies. Further, as much as one-third of shares in the U.S. are traded in dark pools, making it difficult to judge if there is sufficient liquidity to execute a strategy. Lastly, the talk highlights the issue of flash crashes attributable to the ISO order type, which allows an order to rest on one venue while sweeping the other books, and how a market maker that detected toxicity in the order flow caused the price to drop precipitously.

  • 00:20:00 The speaker talks about various trading practices and issues in the industry, starting with Sarah, a UK retail trader who was convicted by a US federal court for illegal trading. He then delves into the concept of spoofing, where illegal trading practices can cause stock market crashes, despite skepticism among many traders. The speaker also discusses the use of dark pools and the issues surrounding them, explaining the flaws and potential manipulation that can occur. Finally, he touches upon the physical infrastructure required to reduce latency, including co-location, direct agency access, and a high-performance trading platform to make the most of high-frequency trading.

  • 00:25:00 The speaker discusses the concept of order flow and its importance in trading. Every trade has a direction, and it matters because if an order is a buy market order, it has a positive sign, while an order initiated by a sale market order has a negative sign. Therefore, every execution has a sign, which can be used as a strong trading signal. Additionally, the speaker emphasizes that medium-frequency trading (MFT) is not just for people who want to trade at high frequency or for a specific market. It is for everyone who trades, as it can prevent losses and provide opportunities to benefit from flash crashes. The section ends with an announcement about a course on trading in milliseconds.

  • 00:30:00 The speaker discusses a new course on algorithmic trading strategies and shares a coupon code for users to access a 75% discount. The course is also part of Phi course learning track, which offers an additional 15% discount. The speaker then moves to a Q&A session, where Dr. Chan answers various questions, including whether brokers are required to route orders to the NBBO or directly to the exchange and how retail traders can benefit from these techniques. Dr. Chan explains that dark pools are accessible to anyone, and one can ask their broker to direct orders to a particular dark pool. Additionally, co-location at a data center is not as expensive as one might think, making it feasible for retail traders to take advantage of low latency.

  • 00:35:00 Dr. Chan discusses the importance of considering return on investment when it comes to trading styles and investments, stating that every investment should bring back more profit than what was invested. He also addresses questions about the impact of machine learning on medium-frequency trading (MFT), explaining that while it can be useful when processing data for high-level strategy development, it is not particularly useful for execution strategy. Additionally, he distinguishes between spoofing and order flow, stating that while the former is a matter of order manipulation, the latter is only concerned with executed trades and their corresponding buy or sell directives. Finally, he addresses questions about measuring order flow as an indicator and creating dark pools, stating that the easiest way to measure order flow is to have access to data with the aggressive flag of each trade and that dark pools are typically created by large brokerages and market makers.

  • 00:40:00 Dr. Chan answers several questions asked by viewers, including how to identify fake or unintended limit orders while analyzing order flow, what book he recommends for someone with a math and finance background to better understand the topic (Algorithmic and High-Frequency Trading by Irene Aldridge), what kind of data can be used for low-frequency trading (free or cheap bar data or data purchased from numerous providers), and whether the order flow of an asset is as per exchange or the total transaction of the asset from all exchanges (each execution happens in a specific trading venue, but when aggregated, the different trades will come from different exchanges). The course does not present a strategy prototype but proposes one that can be refined and improved, with numerous other materials covered in greater detail.

  • 00:45:00 The video discusses the limitations of full-on U.S. stocks and why trading in futures markets yields accurate results. The importance of transaction costs is also emphasized as traders aim to minimize them. The video also answers questions on topics such as how to access dark pools for retail traders and the usefulness of aggregate order flow for trading. The speakers emphasize the need to analyze the strength of signals before making trading decisions based on aggregate order flow. Lastly, viewers are directed to contact an expert for their questions regarding the course.

  • 00:50:00 The video addresses several audience questions around medium-frequency trading. The first question asks about the importance of optimizing parameters for aggregation, which is necessary to determine market impact and execute strategies effectively. Another question inquires about the possibility of differentiating orders coming from dark pools, but the speaker clarifies that dark pools don't display orders. The video also explains that trade data is not enough to compute order flow; it needs to come with an aggressive factor. Additionally, the video differentiates between order flow imbalance and order imbalance, stating that the latter only happens at the end of the U.S. stock market close. Regarding programming languages, the speaker recommends using any language for backtesting but using a high-performance language like C++ for trade execution. Finally, the video explains the importance of receiving a full order log feed from exchanges to determine market impact accurately.

  • 00:55:00 The speaker addresses a question about the relationship between order flow and volume, and how dark pools affect this relationship. The speaker explains that order flow and volume are different measures, with order flow having a sign (positive or negative) while volume does not. Therefore, aggregating order flow over a period of time could result in a much smaller number than the volume for the same period, as orders with opposite signs cancel each other out. The speaker also clarifies that dark pools do not generate order flow and that it is not possible to extract information about dark pools from the volume data. The section ends with a question about the potential of reinforcement learning in medium-frequency trading, to which the speaker responds that many people are already using this technique and highlights the importance of catching up with industry advancements.
Introduction to Medium-Frequency Trading: Trading in Milliseconds
  • 2023.04.18