
How to use Python trading Bot for Investment

Join us in this informative webinar as we delve into the world of Python trading bots for investment purposes. Designed to cater to both novice and experienced traders, this video serves as a valuable resource for individuals interested in leveraging Python for algorithmic trading.

Throughout the webinar, you will gain practical insights and knowledge that will elevate your algo trading strategies. Python, with its extensive libraries and automation capabilities, offers immense potential to streamline and optimize your trading approach. By harnessing the power of Python, you can enhance your trading efficiency and capitalize on market opportunities.

Whether you are just starting your journey in algorithmic trading or seeking to refine your existing skills, this video provides a comprehensive overview of algorithmic trading with Python. It serves as a must-watch resource for traders and investors who aspire to stay ahead in today's dynamic financial landscape. Prepare to expand your understanding of Python's role in algorithmic trading and unlock new possibilities for success.

Topics covered:

  • Python environment and libraries
  • Building an algorithmic trading strategy in Python
  • Backtesting the strategy on historical data
  • Implementing the strategy in the live market
  • Analysing the performance of the strategy
  • Q&A
Published 2021.06.29 • www.youtube.com

Optimal Portfolio Allocation Using Machine Learning

This session aims to teach you about the methods of Optimal Portfolio Allocation Using Machine Learning. Learn how to use algorithms that leverage machine learning at their core to make the capital allocation choice. Presented by Vivin Thomas, VP, Quantitative Research, Equities (EDG) Modelling, JPMorgan Chase & Co.

In this discussion, we will explore the fascinating realm of algorithmic trading, specifically focusing on the utilization of machine learning algorithms. Our primary objective is to design sophisticated algorithms that leverage machine learning at their core to make optimal capital allocation choices.

To achieve this, we will develop a low-frequency strategy that excels in allocating its available capital among a carefully selected group of underliers, also known as basket assets, at regular intervals. By incorporating machine learning techniques, we aim to enhance the accuracy and efficiency of the capital allocation process.

Furthermore, we will construct long-only, low-frequency, asset-allocation algorithms that operate within this framework. These algorithms are designed to outperform a vanilla allocation strategy that relies solely on empirical momentum indicators for decision-making. By comparing their performance against this benchmark, we can assess the value and effectiveness of leveraging machine learning in the asset allocation process.
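
The machine learning allocator itself is not spelled out in this summary, but the momentum benchmark it is compared against is easy to sketch. A minimal long-only illustration in Python, assuming weights proportional to positive trailing returns (the 126-day lookback is a placeholder):

```python
import pandas as pd

def momentum_weights(prices: pd.DataFrame, lookback: int = 126) -> pd.Series:
    """Long-only weights proportional to positive trailing returns."""
    trailing = prices.iloc[-1] / prices.iloc[-lookback] - 1.0
    trailing = trailing.clip(lower=0.0)        # long-only: losers get zero weight
    if trailing.sum() == 0.0:                  # nothing has positive momentum
        return pd.Series(1.0 / prices.shape[1], index=prices.columns)
    return trailing / trailing.sum()           # weights sum to 1
```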

Through this exploration, we will gain insights into the potential benefits and advantages of incorporating machine learning algorithms into capital allocation strategies. Join us as we delve into the exciting world of algorithmic trading and discover how these advanced algorithms can revolutionize the way we approach asset allocation and investment decisions.

Published 2021.06.17 • www.youtube.com

Sentiment Analysis Tutorial | Learn to Predicting Stock Trends & Use Statistical Arbitrage

During this webinar, the presenter introduces three accomplished individuals, Desigan Reddy, Javier Cervantes, and Siddhant Vaidya, who have embarked on their journey in algorithmic trading through the EPAT program. They share their EPAT presentations and projects with the viewers, covering various topics and their experiences in the program.

The presenter emphasizes that the flagship EPAT program offers participants the opportunity to specialize in their preferred asset class or strategy paradigm for their project. This tailored approach allows participants to explore and develop expertise in their chosen area of focus.

It is highlighted that this session will be recorded and shared on YouTube and the blog, providing a valuable learning opportunity for aspiring quants and individuals interested in algorithmic trading. The presenter encourages viewers to take advantage of the knowledge shared by these experienced traders and the insights gained from their EPAT projects.

The first presentation is delivered by Desigan Reddy, a fixed income dealer from South Africa, who shares a project on predicting stock trends using technical analysis. He collected data on the top 10 stocks in the South African Top 40 index spanning 10 years, used Python to derive six common technical indicators from this data, and incorporated them into a machine learning model for stock trend analysis. He also discusses his motivation and fascination with the field of machine learning throughout the project.

Moving on, the speaker discusses the investment strategy employed and presents the results of the machine learning algorithm. He used an equally weighted portfolio of 10 stocks and implemented both daily and weekly rebalancing. The daily rebalancing portfolio returned 44.69% over the past two and a half years, against a Top 40 benchmark return of 21.45%; the weekly rebalancing portfolio returned 36.52%, also a significant outperformance. He acknowledges the time and effort required to fine-tune the machine learning model's parameters and highlights the learning experience gained from the process, while also recognizing the limitations of comparing the strategy only against technical indicators such as relative strength, Bollinger Bands, and MACD.
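
For readers who want to reproduce this kind of comparison, a minimal pandas sketch of an equally weighted, periodically rebalanced portfolio's total return (assuming a `prices` DataFrame with a DatetimeIndex and one column per stock) might look like:

```python
import pandas as pd

def rebalanced_total_return(prices: pd.DataFrame, freq: str = "D") -> float:
    """Total return of an equally weighted portfolio rebalanced every period.

    Use freq="W" for weekly rebalancing.
    """
    period_close = prices.resample(freq).last().dropna()
    asset_returns = period_close.pct_change().dropna()
    portfolio_returns = asset_returns.mean(axis=1)   # equal weights, reset each period
    return (1.0 + portfolio_returns).prod() - 1.0
```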

The speaker reflects on the lessons learned from the project and ways to improve it. He mentions an interest in exploring an index comprising the top 10 stocks, and acknowledges that using the shuffle attribute of his machine learning pipeline on a financial time series was a mistake. He expresses pride in having coded, in Python, a strategy that combines machine learning and technical indicators, and proposes incorporating fundamental factors such as P/E ratios, sentiment analysis, and other markers in future projects, as well as exploring alternative machine learning models. He also addresses audience questions about his choice of technical indicators and his use of the random forest algorithm.
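
The shuffle mistake is worth making concrete. With scikit-learn's `train_test_split`, the default `shuffle=True` mixes future rows into the training set; a chronological split avoids this look-ahead bias (toy data stands in for the real features):

```python
import numpy as np
from sklearn.model_selection import train_test_split

# Toy stand-ins: six indicator features per day and a binary trend label.
X = np.random.rand(500, 6)
y = np.random.randint(0, 2, 500)

# shuffle=True (the default) leaks future rows into training,
# which is the mistake described above. A chronological split:
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, shuffle=False)
```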

Following the presentation, the presenter engages in a Q&A session with the viewers. Various questions are addressed, including inquiries about intraday trading strategies and recommended books for learning machine learning in the context of financial analysis. The presenter suggests a technical analysis book for understanding conventional indicators and also mentions the potential focus on incorporating unconventional views of indicators and fundamental factors into machine learning algorithms for future research.

After the Q&A, the presenter introduces the next speaker, Javier Cervantes, a corporate bond trader from Mexico with over eight years of experience in trading and credit markets. Javier shares his research on using statistical arbitrage to predict stock trends in the Mexican market, which is characterized by its small and concentrated market capitalization. He explains the attractiveness of this opportunity due to the absence of dedicated funds, limited liquidity generation from participants, and the competitive landscape for arbitrage strategies.

Javier discusses the process of building a database to collect information on Mexican stocks, outlining the challenges encountered, such as incomplete and faulty data, filtering and cleaning issues, and the assumptions underlying the strategy. To address these challenges, around 40% of the universe of issuers were removed, and stocks with low daily trading volumes were excluded.

The presenter then analyzes the results of Javier's statistical arbitrage strategy applied to six different stock pairs, which yielded positive results. The returns of the pairs showed low and mostly negative correlations, suggesting that diversification could significantly benefit the implementation of the strategy as an aggregate portfolio. When analyzing the results of a portfolio comprising all six pairs, the presenter highlights an annual growth rate of 19%, a maximum drawdown of only 5%, and an aggregate Sharpe ratio of 2.45, demonstrating significant superiority compared to individual pairs. Additionally, the presenter emphasizes several risks that should be considered before deploying real capital, including trading costs, different time horizons, market conditions, and the necessity of implementing a stop-loss strategy.

The speaker emphasizes the importance of regularly testing a statistical arbitrage strategy to ensure its reliability over time, as long-term relationships between pairs can break down even if initial stationarity is observed. They suggest the possibility of using machine learning algorithms to select eligible pairs for the trading strategy, rather than manually selecting them based on assumptions about different market sectors. The speaker concludes by mentioning that there is ample room for further research to enhance the model's efficiency and improve the reliability of returns. During the Q&A session, they address questions regarding the time period used in the data, the key takeaways from negative correlations among pairs' returns, and the feasibility of implementing an intraday strategy.
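
The exact tests Javier used are not specified, but a periodic re-check of a pair along the lines described could be sketched with statsmodels' Engle-Granger cointegration test plus an ADF test on the hedged spread:

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.stattools import adfuller, coint

def pair_still_tradable(a: pd.Series, b: pd.Series, alpha: float = 0.05) -> bool:
    """Re-run the cointegration/stationarity checks on a pair of price series."""
    _, coint_pvalue, _ = coint(a, b)               # Engle-Granger test
    hedge_ratio = np.polyfit(b, a, 1)[0]           # slope of a regressed on b
    spread = a - hedge_ratio * b
    adf_pvalue = adfuller(spread)[1]               # ADF test on the spread
    return coint_pvalue < alpha and adf_pvalue < alpha
```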

Finally, the presenter introduces Siddhant Vaidya, a trader who shares his project experience. Siddhant begins by discussing his background as a trader and recounts an incident involving a midcap hotel chain stock that prompted him to question the impact of news and sentiment on stock prices. He outlines his project, which is divided into three parts: news extraction, sentiment analysis, and trading strategy. Nvidia Corporation is chosen as the stock for the project due to its liquidity and volatility.

Siddhant explains the process of gathering news articles using the newsapi.org database and extracting sentiment scores using the newspaper library in Python. The sentiment scores are then used to generate long or short trades based on extreme scores. He shares the challenges faced during the programming phase but emphasizes the importance of selecting the right tools and receiving support from mentors. While the results are encouraging, he highlights the need to approach backtests with caution and acknowledges room for improvement in each step of the project. He recommends the VADER sentiment analyzer in Python for its accuracy in generating sentiment scores.
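
A minimal sketch of the sentiment-scoring step as described, using the `newspaper` and VADER libraries (the article URLs would come from newsapi.org, which is not shown here):

```python
# pip install newspaper3k vaderSentiment
from newspaper import Article
from vaderSentiment.vaderSentiment import SentimentIntensityAnalyzer

analyzer = SentimentIntensityAnalyzer()

def article_sentiment(url: str) -> float:
    """Download and parse one article, returning VADER's compound score in [-1, 1]."""
    article = Article(url)
    article.download()
    article.parse()
    return analyzer.polarity_scores(article.text)["compound"]
```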

The speaker addresses sentiment analysis and its limitations when applied to news articles. He points out that while sentiment analysis can be effective at detecting sentiment in tweets and social media comments, it may be less suitable for news articles because negative events are reported differently there. He also responds to audience questions about the sources used for sentiment analysis, the process of converting VADER scores into trading signals, the use of deep learning in sentiment analysis (which he has not explored yet but recognizes as promising), and other related topics.

Finally, the speaker describes the data used for backtesting the sentiment analysis program. Around 10 to 15 impactful news articles were collected daily to compute an average sentiment score for each day, and roughly six months' worth of articles were used, alongside six months of day-level price data for Nvidia's stock. He clarifies that no fundamental or technical aspects of the stock were considered during the trades or backtesting; trading signals were derived solely from the sentiment score.
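
The thresholds for "extreme" scores are not given in the summary; a hypothetical sketch of turning daily average sentiment into long/short/flat signals, with ±0.5 as placeholder cutoffs:

```python
import pandas as pd

def daily_signals(scores: pd.Series, extreme: float = 0.5) -> pd.Series:
    """Average article scores per day and trade only on extreme readings:
    +1 = long, -1 = short, 0 = flat. `scores` is indexed by article timestamp."""
    daily = scores.resample("D").mean()
    signal = pd.Series(0, index=daily.index)
    signal[daily > extreme] = 1
    signal[daily < -extreme] = -1
    return signal
```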

  • 00:00:00 The presenter introduces three accomplished individuals - Desigan Reddy, Javier Cervantes, and Siddhant Vaidya - who have embarked on their journey in algo trading through EPAT. They will be sharing their EPAT presentations and projects with the viewers on various topics, along with their experience in the EPAT program. The presenter mentions that the project opportunity in the flagship EPAT program allows participants to specialize in their choice of asset class or strategy paradigm, and highlights that this session, which will be recorded and shared on YouTube and the blog, will be a good learning opportunity for aspiring quants. The first presentation will be on predicting stock trends using technical analysis by Desigan Reddy, a fixed income dealer in South Africa.

  • 00:05:00 The presenter discusses the project he submitted last year in the EPAT program. Its aim was to broaden the field of machine learning in the South African market and explore the integration of technical analysis with machine learning. He collected data on the top 10 stocks in the South African Top 40 index over 10 years and used Python to derive six common technical indicators, which were then incorporated into a machine learning model for stock trend analysis. He also talks about his motivation and fascination with the field of machine learning.

  • 00:10:00 The speaker discusses the investment strategy he used and the results of his machine learning algorithm. He used an equally weighted portfolio of 10 stocks and looked at daily and weekly rebalancing strategies. The daily rebalancing portfolio returned 44.69% over the last two and a half years, compared to the Top 40 benchmark's 21.45%; the weekly rebalancing portfolio also outperformed, returning 36.52%. Fine-tuning the model's parameters took some time, which the speaker treated as a learning opportunity. He also acknowledges flaws in comparing the strategy only against technical indicators such as relative strength, Bollinger Bands, and MACD.

  • 00:15:00 The speaker reflects on what he learned from his project and how he could improve it in the future. He mentions how looking at an index consisting of the top 10 stocks would have been interesting, and how using the shuffle attribute in his machine learning algorithm on a financial time series was a mistake. The speaker notes that he's proud to have been able to code in Python and produce a strategy incorporating machine learning and technical indicators. For future projects, he suggests incorporating fundamentals such as P/E ratios, sentiment analysis, and other markers, as well as looking into other machine learning models. He also answers a question regarding his selection of technical indicators and the random forest algorithm.

  • 00:20:00 The presenter answers questions from viewers, including the strategy for intraday trading and the recommended books for learning machine learning for financial analysis. The presenter suggests a technical analysis book for learning conventional indicators and also mentions that incorporating unconventional views of indicators and fundamental ones into machine learning algorithms could be a potential focus for future research. After the Q&A, the presenter introduces the speaker, Javier Cervantes, a corporate bond trader from Mexico with over eight years of experience in trading and credit markets.

  • 00:25:00 The speaker discusses the motivation behind their research in using statistical arbitrage to predict stock trends in the Mexican market, which has a small and concentrated market capitalization. They explain that the lack of dedicated funds, participants generating liquidity, and competition for arbitrage strategies make it an attractive opportunity. The speaker then details how they built their database to collect information on Mexican stocks and the challenges they faced, such as incomplete and faulty data, filtering and cleaning, and the assumptions of the strategy. They ultimately removed around 40 percent of the universe of issuers and removed stocks with low daily trading volumes to address these issues.

  • 00:30:00 The presenter analyzes the results of his statistical arbitrage strategy applied to six different stock pairs, which produced positive results. He found that the correlation of the returns across pairs was low and mostly negative, suggesting diversification could greatly benefit the strategy's implementation as an aggregate portfolio. Using a portfolio of all six pairs, the annual growth rate was 19%, with a maximum drawdown of only 5% and an aggregate Sharpe ratio of 2.45, significantly superior to any single pair. The presenter also outlines several risks that need to be considered before putting any real money to work, including trading costs, different time horizons and market conditions, and the need for a stop-loss strategy.

  • 00:35:00 The speaker discusses the importance of regularly testing a statistical arbitrage strategy to ensure its reliability over time, as long-term relationships can break down even if pairs show stationarity initially. They also suggest the possibility of using machine learning algorithms to select pairs of stocks eligible for trading strategy, rather than manually selecting them based on assumptions about different market sectors. The speaker concludes by saying that there is still a lot of room for research to make the model more efficient and returns more reliable. During the Q&A session, they answer questions on the time period used in the data, the key takeaway from negative correlations on returns of different pairs, and the possibility of implementing an intraday strategy.

  • 00:40:00 The speaker introduces himself and discusses his experience as a trader. He explains how an incident with a midcap hotel chain stock led him to question the impact of news and sentiment on stock prices. He then shares his project experience, which he divided into three parts: news extraction, sentiment analysis, and trading strategy. The stock he chose for his project was Nvidia Corporation due to its liquidity and volatility.

  • 00:45:00 The speaker discusses the process of gathering news articles using the newsapi.org database and extracting sentiment scores using the newspaper library in Python. The sentiment score is then used to generate a long or short trading scheme based on extreme scores. The speaker faced some challenges with programming but received support from mentors and found that the key to success is selecting the right tools for the project. The results were encouraging, but the speaker emphasizes that backtests should be taken with a grain of salt, and that there is always room for improvement in each step of the project. The speaker recommends the VADER sentiment analyzer tool in Python for its accuracy in generating sentiment scores.

  • 00:50:00 The speaker discusses sentiment analysis and its limitations when it comes to news articles. While sentiment analysis can be useful for detecting sentiment in tweets and social media comments, it is less suitable for news articles because they report negative events differently. The speaker also answers a few questions related to the sources used for sentiment analysis, backtesting, converting VADER scores into trading signals, and the use of deep learning in sentiment analysis. Though the speaker has not used deep learning for sentiment analysis yet, they acknowledge that it is worth exploring going forward.

  • 00:55:00 The speaker discussed the data used for backtesting in his sentiment analysis program. He gathered 10 to 15 impactful news articles per day to calculate an average sentiment score for an entire day, and his program used around six months' worth of these articles. On the stock returns front, he had day-level data for Nvidia's stock over six months. The speaker clarified that no fundamental or technical aspects of the stock were taken into consideration during the trades or while backtesting; trading signals were only created based on the sentiment score.
Published 2020.10.16 • www.youtube.com

Quant Trading | Strategies Explained by Michael Harris

In this tutorial, the concepts of market complexity and reflexivity are introduced and discussed. The focus is on specific regime changes that have occurred in U.S. equity markets and other markets. The presenter, Michael Harris, explores how these regime changes can impact strategy development and provides insights on minimizing their effects by adjusting data and strategy mix.

The tutorial is designed to be practical, allowing attendees to replicate the analysis on their own systems. AmiBroker is used for the analysis during the webinar, and attendees can download the Python code for further practice after the session.

Michael also shares a newly developed indicator that measures momentum and mean-reversion dynamic state changes in the market. The code for this indicator is provided, enabling attendees to incorporate it into their own trading strategies.
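
Harris's indicator code is distributed with the webinar and is not reproduced here. As a rough stand-in for the idea of a momentum-versus-mean-reversion state measure, rolling autocorrelation of returns is one simple proxy (an assumption for illustration, not Harris's indicator):

```python
import pandas as pd

def trend_state(close: pd.Series, window: int = 60) -> pd.Series:
    """Rolling lag-1 autocorrelation of daily returns:
    persistently positive suggests momentum, negative suggests mean reversion."""
    returns = close.pct_change()
    return returns.rolling(window).apply(lambda r: r.autocorr(lag=1), raw=False)
```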

Michael Harris, the speaker, has a wealth of experience in trading commodity and currency futures spanning 30 years. He is the author of several books on trading, including "Short-Term Trading with Price Patterns," "Stock Trading Techniques Based on Price Patterns," "Profitability and Systematic Trading," and "Fooled by Technical Analysis: The Perils of Charting, Backtesting, and Data-Mining." He is also the author of the Price Action Lab Blog and the developer of DLPAL software. Michael holds two master's degrees, one in Mechanical Engineering with a focus on control systems and optimization, and another in Operations Research with an emphasis on forecasting and financial engineering from Columbia University.

The tutorial is divided into chapters, covering different aspects of market complexity and regime changes. The speaker's introduction sets the stage, followed by an overview of the topics to be covered. An index trading strategy is explained, along with the limitations of its quantitative claims. The mean-reversion strategy is then discussed, leading to a deeper exploration of regime changes and how they occur. The dynamics of mean reversion in the S&P market are analyzed, emphasizing the complexity present in financial markets.

The adverse effects of market complexity are addressed, underscoring the challenges it poses to traders. The tutorial concludes with a discussion on additional complexities in financial markets and provides resources for further exploration. A question and answer session follows, allowing attendees to clarify any doubts or seek further insights.

This tutorial provides valuable insights into market complexity, regime changes, and their implications for trading strategies, presented by an experienced trader and author in the field.

Chapters:

00:00 - Speaker Introduction
02:23 - Tutorial Overview
03:54 - Index Trading Strategy Explained
07:30 - Limitations of Quantitative Claims
10:45 - Mean Reversion Strategy
11:38 - Regime Change
16:30 - How it Happens
18:17 - S&P Mean Reversion Dynamics
24:35 - Complexity in Financial Markets
26:42 - Adverse Effects
36:56 - More Complexity in Financial Markets
42:17 - Resources
43:35 - Q&A

Published 2020.10.09 • www.youtube.com

Algorithmic Trading | Full Tutorial | Ideation to Live Markets | Dr Hui Liu & Aditya Gupta

In this video, the speaker provides a comprehensive overview of the master class on ideating, creating, and implementing an automated trading strategy. The speaker, Aditya Gupta, introduces Dr. Hui Liu, a hedge fund founder and author of a Python package that interacts with the Interactive Brokers API. He also mentions a surprise development related to the API that Dr. Liu will discuss.

The video begins by explaining the definition of automated trading and highlighting the three main steps involved in algorithmic trading. The speaker shares his personal journey of transitioning from discretionary to systematic trading using technical analysis.

The importance of analysis in algorithmic trading is emphasized, with a focus on three types of analysis: quantitative, technical, and fundamental. The various aspects of analysis involve studying historical charts, financial statements, and micro and macroeconomic factors, as well as using mathematical models and statistical analysis to create trading strategies. These strategies are essentially algorithms that process data and generate signals for buying and selling. The process includes strategy development, testing, and paper trading before moving on to live trading. To connect with live trading, broker connectivity and an API are necessary, with iBridgePy discussed as a potential solution. The concept of the strategy spectrum is also introduced, showcasing different profit drivers and types of analysis.

The speakers delve into quantitative analysis and its role in creating trading strategies and portfolio management. They explain that quantitative analysis involves using mathematical models and statistical analysis to gain insights from historical data, which can be applied to develop quantitative trading strategies. Quantitative analysis is particularly useful for risk management and calculating take profit and stop loss levels for a strategy. They proceed to demonstrate the process of creating a simple moving average crossover strategy using libraries like pandas, numpy, and matplotlib, and calculating the strategy's return.
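
A minimal sketch of the moving average crossover strategy described above, using pandas (the 20/50-day windows are placeholders; the webinar's exact parameters are not stated):

```python
import pandas as pd

def sma_crossover_returns(close: pd.Series, fast: int = 20, slow: int = 50) -> pd.Series:
    """Daily strategy returns: long when the fast SMA is above the slow SMA."""
    fast_sma = close.rolling(fast).mean()
    slow_sma = close.rolling(slow).mean()
    position = (fast_sma > slow_sma).astype(int).shift(1)  # act on the next bar
    return close.pct_change() * position

# Cumulative strategy return, e.g.:
# equity = (1 + sma_crossover_returns(close).fillna(0)).cumprod()
```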

Different performance metrics used in algorithmic trading, such as the Sharpe ratio, compounded annual growth rate (CAGR), and maximum drawdown, are discussed. The importance of avoiding backtesting biases and common mistakes in the process is emphasized. The speakers also outline the skill set required for quantitative analysis, which includes knowledge of mathematics and statistics, interest in dealing with data, proficiency in Python coding, and an understanding of finance. They outline the process of automated trading strategy creation, starting from data sources and analysis, all the way to signal execution, and link it to the application programming interface (API). Dr. Hui Liu introduces himself, provides a brief background, and gives an overview of the upcoming topics on algorithmic trading with TD Ameritrade and Interactive Brokers using Python.
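
The three metrics named here can be computed in a few lines, assuming daily returns and 252 trading days per year:

```python
import numpy as np
import pandas as pd

TRADING_DAYS = 252

def sharpe_ratio(daily_returns: pd.Series, risk_free_annual: float = 0.0) -> float:
    excess = daily_returns - risk_free_annual / TRADING_DAYS
    return np.sqrt(TRADING_DAYS) * excess.mean() / excess.std()

def cagr(equity: pd.Series) -> float:
    years = len(equity) / TRADING_DAYS
    return (equity.iloc[-1] / equity.iloc[0]) ** (1.0 / years) - 1.0

def max_drawdown(equity: pd.Series) -> float:
    peak = equity.cummax()
    return ((equity - peak) / peak).min()   # most negative peak-to-trough drop
```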

The speaker then focuses on the three cornerstones of algorithmic trading using the iBridgePy platform: real-time price display, historical data retrieval, and order placement. These three cornerstones serve as the building blocks for constructing complex strategies. The speaker presents three sample strategies: portfolio rebalancing, a buy low and sell high strategy, and a trend-catching strategy using moving average crossovers. The benefits of algorithmic trading, such as reduced pressure and fewer human errors, are highlighted. The speaker recommends investing time in researching good strategies rather than spending excessive effort on coding, utilizing a trading platform like iBridgePy. The flexibility to seamlessly switch between backtesting and live trading within the iBridgePy platform is also emphasized.

The video proceeds to discuss various brokers and Python platform options available for algorithmic trading. TD Ameritrade is introduced as a US-based brokerage firm offering an electronic trading platform with zero commission trading. Interactive Brokers is highlighted as a leading provider of API solutions, commonly used by smaller to medium-sized hedge funds for automating trading. Robinhood, another US-based brokerage, is mentioned for its commission-free trading and algo trading capabilities. The advantages of using the Python trading platform iBridgePy are explored, including the protection of traders' intellectual property, support for simultaneous backtesting and live trading, and compatibility with various package options. iBridgePy also facilitates trading with different brokers and managing multiple accounts.

The presenters discuss the need for effective tools that let hedge fund managers handle multiple accounts concurrently and introduce the hybrid trading platform iBridgePy, which supports Quantopian-style algorithm code for Python-based trading. The process of downloading and setting up iBridgePy on a Windows system is demonstrated, including configuring connectivity to the Interactive Brokers trading platform. The main entrance file of the package, runme.py, is showcased, requiring only two modifications: the account code and the selected strategy to execute.

Dr. Hui Liu and Aditya Gupta provide a tutorial on algorithmic trading, demonstrating how to display account information using an example. They explain the usage of the initialize and handle_data functions within iBridgePy, which offers various functions specifically designed for algorithmic trading, and illustrate how easy it is to code on the platform.

The speaker dives into two topics: displaying real-time prices and retrieving historical data. For real-time prices, a demo is presented where the code is structured to print the timestamp and ask price every second using the handle data function. To fetch historical data for research purposes, the speaker explains the request historical data function and demonstrates how it can be used to retrieve a pandas data frame containing historical data, including open, high, low, close, and volume. The code structure is examined, and a demo is shown where the code is updated to retrieve historical data and print the output in the console.
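
A sketch of the two demos in iBridgePy's Quantopian-style layout; the function names below follow the narration and should be checked against the iBridgePy documentation:

```python
# iBridgePy-style strategy file: a sketch following the narration above;
# verify names and signatures against the iBridgePy documentation.

def initialize(context):
    context.sec = symbol('SPY')  # the ETF used in the demo

def handle_data(context, data):
    # Demo 1: print the timestamp and ask price on every call
    print(get_datetime(), show_real_time_price(context.sec, 'ask_price'))

    # Demo 2: fetch historical bars as a pandas DataFrame
    # (open, high, low, close, volume) for research
    hist = request_historical_data(context.sec, '1 day', '30 D')
    print(hist.tail())
```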

The speaker demonstrates how to place a limit order to buy 100 shares of SPY at $99.95 when the ask price exceeds $100.01 in iBridgePy. The contract and share quantities to trade are defined, and the 'order' function is utilized to place the limit order. The speaker also demonstrates placing an order at the market price using the 'order status monitor' function to track the order's status. After showcasing these basic steps, the speaker explains that the next phase involves determining the contracts to trade and the frequency of trading decisions to construct trading strategies.
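
Extending the same sketch, the limit-order demo might look like this (again, names per the narration, to be verified against the docs):

```python
def handle_data(context, data):
    ask = show_real_time_price(context.sec, 'ask_price')
    if ask > 100.01:
        # buy 100 shares of SPY at a limit of $99.95
        order_id = order(context.sec, 100, style=LimitOrder(99.95))
        order_status_monitor(order_id, target_status='Filled')
```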

The steps involved in executing an algorithmic trading strategy are discussed. The need for regularly handling data and scheduling tasks using functions like the schedule function is explained. The process of calculating technical indicators is explored, which entails requesting historical data from a broker and utilizing pandas' data frame capabilities for calculations. Order types, such as market orders and limit orders, are examined, and a brief mention is made of incorporating stop orders into the code or algorithms.

The speaker then proceeds to explain a demonstration strategy for rebalancing a portfolio based on trading instructions, a popular approach among fund managers. The manual execution of trading instructions using Python dictionaries is demonstrated, and a simple code that schedules a trading decision daily and automatically rebalances the account using order target percentages is presented. A live demo is provided to showcase the process of rebalancing an account and viewing its position.
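
The daily rebalancing demo can be sketched as follows; the target dictionary and the minutes-before-close timing are illustrative assumptions:

```python
def initialize(context):
    # hypothetical trading instructions: target portfolio weights
    context.targets = {symbol('SPY'): 0.6, symbol('TLT'): 0.4}
    schedule_function(rebalance,
                      date_rule=date_rules.every_day(),
                      time_rule=time_rules.market_close(minutes=15))

def rebalance(context, data):
    for security, weight in context.targets.items():
        order_target_percent(security, weight)  # "order target percentage" in the talk
```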

Three different trading strategies that can be implemented using Python are described. The first is a simple rebalancing strategy that allows users to monitor their position, shares, and cost basis. The second is a mean reversion strategy used to identify trading opportunities when the closing price is lower than the previous day's price. Lastly, a moving average crossover strategy is discussed, focusing on using historical data to calculate the crossover point for potential buy and sell opportunities. All three strategies involve making trading decisions before the market closes at specific times and using market orders to execute trades. The code for implementing all strategies is straightforward and easily implemented using Python and scheduling functions.

Dr. Hui Liu and Aditya Gupta explain how to use moving averages to determine when to buy or sell stocks in a portfolio. They demonstrate the implementation of this strategy on the iBridgePy platform and then backtest it by applying historical data to evaluate its performance. The tutorial covers using the testme.py entry point within iBridgePy to input historical data for simulation and obtain results for account balance and transaction details.

The speaker explains how to view the simulation results of an algorithmic trading strategy by accessing the performance analysis chart. This chart displays the balance log and various statistics such as the Sharpe ratio, mean, and standard deviation, which can be further customized. The speaker emphasizes that iBridgePy is capable of handling multiple accounts and rebalancing them. The platform is flexible, user-friendly, and can be used for setting up an algorithmic trading platform, backtesting, live trading, trading with different brokers, and managing multiple accounts. Additionally, the speaker invites viewers to explore their rent-a-coder service for coding assistance and subscribe to their YouTube channel for free tutorials.

The presenters discuss how iBridgePy with Interactive Brokers can be used for trading futures and options, along with other types of contracts. They explain that the Super Symbol feature allows for defining various contract types, such as stock options, futures, indexes, forex, and more. An example is given of a structured product traded on the Hong Kong exchange, which is not a stock; the Super Symbol function enables trading any contract type other than stocks. Stop losses are briefly mentioned, highlighting how they can be incorporated into the code or built into an algorithm.

The presenters continue the discussion by highlighting the importance of risk management in algorithmic trading. They emphasize the need for implementing stop losses as a risk mitigation strategy to limit potential losses in case of adverse market movements. Stop losses can be integrated into the code or algorithm to automatically trigger the sale of a security when it reaches a predetermined price level.

Next, they delve into the concept of position sizing, which involves determining the appropriate quantity of shares or contracts to trade based on the available capital and risk tolerance. Proper position sizing helps manage risk and optimize returns by ensuring that the allocation of capital aligns with the trader's risk management strategy.

The speakers also touch upon the significance of performance evaluation and monitoring in algorithmic trading. They discuss various performance metrics used to assess the effectiveness of trading strategies, including the Sharpe ratio, compounded annual growth rate (CAGR), and maximum drawdown. These metrics provide insights into the risk-adjusted returns, long-term growth, and potential downside risks associated with the strategy.

To avoid common pitfalls and biases in backtesting, the presenters highlight the importance of ensuring data integrity and using out-of-sample testing. They caution against over-optimization or "curve fitting," which refers to tailoring a strategy too closely to historical data, leading to poor performance in live trading due to the strategy's lack of adaptability to changing market conditions.

The speakers stress that successful algorithmic trading requires a combination of skills and knowledge. They mention the necessity of having a solid foundation in mathematics and statistics, an interest in working with data, proficiency in coding using Python, and a good understanding of financial markets. They encourage individuals interested in algorithmic trading to continuously expand their knowledge and skill set through learning resources and practical application.

In the concluding segment of the video, Dr. Hui Liu introduces himself and shares his background as a hedge fund founder and an author of a Python package that interacts with the Interactive Brokers API. He briefly discusses upcoming topics related to algorithmic trading with TD Ameritrade and Interactive Brokers using Python, setting the stage for further exploration of these subjects in future master classes.

The video provides a comprehensive overview of algorithmic trading, covering the journey from ideation to implementation of automated trading strategies. It highlights the importance of analysis, discusses different types of analysis (quantitative, technical, and fundamental), and explores strategy development, testing, and execution. The speakers demonstrate the practical application of the Python-based platform iBridgePy, showcasing its capabilities in real-time price tracking, historical data retrieval, order placement, and portfolio rebalancing.

  • 00:00:00 The video presents a preview of what the master class will cover: the journey of ideating, creating, and implementing an automated trading strategy. The speaker, Aditya Gupta, introduces Dr Hui Liu, a hedge fund founder and author of a Python package that interacts with the Interactive Brokers API, and mentions a surprise development related to the API that Dr Liu will talk about. The video then covers the definition of automated trading, the three main steps of algorithmic trading, and the speaker's personal journey of switching from discretionary to systematic trading using technical analysis.

  • 00:05:00 The importance of analysis in algorithmic trading is discussed, with three types of analysis mentioned: quantitative, technical, and fundamental. The different types of analysis involve studying historical charts, financial statements, and micro and macroeconomic factors, and using mathematical models and statistical analysis to create a strategy. The strategy is an algorithm that takes in data and provides signals for buying and selling. The process involves developing and testing the strategy, and paper trading before moving on to live trading. To connect with live trading, broker connectivity and an API are needed, and iBridgePy is discussed as a potential solution. The strategy spectrum is also presented, with various profit drivers and types of analysis shown.

  • 00:10:00 The speakers discuss quantitative analysis and its use in creating trading strategies and portfolio management. They explain that quantitative analysis involves using mathematical models and statistical analysis to understand historical data and turn it into insights that can be used to create quantitative trading strategies. Compared to other forms of analysis, quantitative analysis is particularly useful for risk management and calculating take profit and stop loss levels for a strategy. They then walk through the process of creating a simple moving average crossover strategy using libraries like pandas, numpy, and matplotlib, and calculating the strategy return.

  • 00:15:00 The speakers discuss the different performance metrics used in algorithmic trading, such as the Sharpe ratio, compounded annual growth rate (CAGR), and maximum drawdown. They also emphasize the importance of avoiding backtesting biases and common mistakes in the process. Furthermore, they suggest that quant analysis requires knowledge of mathematics and statistics, interest in dealing with data, coding skills in Python, and an understanding of finance. They also outline the process of automated trading strategy creation from data sources and analysis to signal execution and link it to the application programming interface (API). Lastly, Dr. Hui Liu introduces himself and his background and briefly discusses the upcoming topics on algorithmic trading with TD Ameritrade and Interactive Brokers using Python.

  • 00:20:00 The speaker discusses the three cornerstones of algorithmic trading using the iBridgePy platform: showing real-time prices, getting historical data, and placing orders. These three cornerstones can be used to build complicated strategies, and the speaker gives three sample strategies: rebalancing portfolios, a buy-low-sell-high strategy, and a trend-catching strategy using moving average crossovers. The benefits of algo trading include less pressure and fewer human errors, and the speaker recommends spending time on researching good strategies rather than coding, using a trading platform like iBridgePy. Backtesting and live trading can be switched easily within the iBridgePy platform.

  • 00:25:00 The video discusses the different brokers and Python platform options available for algorithmic trading. For brokers, TD Ameritrade is a US-based brokerage firm that offers an electronic trading platform with zero-commission trading, while Interactive Brokers provides one of the leading API solutions in the industry, which many smaller to medium hedge funds use to automate their trading. Robinhood is a US-based brokerage that is also commission-free and offers algo trading. The video then discusses the advantages of the Python trading platform iBridgePy, such as protecting traders' intellectual property, supporting backtesting and live trading together, and allowing the use of any Python packages. Additionally, iBridgePy supports different brokers and can manage multiple accounts.

  • 00:30:00 The presenters discuss the need for a good tool for hedge fund managers to manage multiple accounts at the same time, and introduce the hybrid trading platform iBridgePy, which supports Quantopian-style algorithm code for Python-based trading. The presenters demonstrate how to download and set up iBridgePy on a Windows system, including how to configure the Interactive Brokers trading platform. They also show the main entrance file of the package, runme.py, which only needs two changes: the account code and the selected strategy to run.

  • 00:35:00 Dr Hui Liu and Aditya Gupta give a tutorial on algorithmic trading and demonstrate how to display account information using an example. They show how to use the initialize and handle_data functions in iBridgePy, a platform that offers different functions for algorithmic trading. They also show how to code the display of real-time prices, using the example of printing the ask price of the SPY ETF that tracks the S&P 500 index. Through their demonstration, they make it clear how easy it is to code on the iBridgePy platform.

  • 00:40:00 The speaker discusses two topics: showing real-time prices and fetching historical data. For real-time prices, a demo is shown where the code is structured to print the timestamp and ask price every second using the handle data function. To fetch historical data for research purposes, the speaker explains the use of the request historical data function and demonstrates how it can be used to return a pandas data frame of historical data with open, high, low, close, and volume. The code structure is discussed, and a demo is shown where the code is updated to fetch historical data and the output is printed in the console.

  • 00:45:00 The speaker demonstrates how to place a limit order to buy 100 shares of SPY at $99.95 when the ask price is greater than $100.01 in iBridgePy. They define the contract to trade and the shares to buy, and use the 'order' function to place the limit order. The speaker also shows how to place an order at the market price, using the 'order status monitor' function to monitor the status of the order. After demonstrating these basic steps, the speaker explains that the next step is to determine the contracts to trade and how often to make trading decisions in order to build trading strategies.

  • 00:50:00 The speaker discusses the steps involved in executing an algorithmic trading strategy. They begin by explaining the need to handle data regularly and to schedule tasks using the schedule function. They also discuss the process of calculating technical indicators, which involves requesting historical data from the broker and using a pandas DataFrame to make the calculations. After that, they cover order types such as market orders and limit orders and briefly touch on how to use stop orders. The speaker then explains the demo strategy of rebalancing a portfolio based on trading instructions, a popular approach among fund managers. They demonstrate how to manually execute trading instructions using Python dictionaries and present a simple code that schedules a trading decision every day and automatically rebalances the account using order target percentages. Finally, they provide a live demo of how to rebalance an account and view its position.

  • 00:55:00 The speaker describes three different trading strategies that can be implemented using Python. The first is a simple rebalancing strategy that allows the user to see their position, shares, and cost basis. The second is a mean reversion strategy that is used to identify trading opportunities when the closing price is lower than the previous day's price. Finally, a moving average crossover strategy is discussed with a focus on using historical data to calculate the crossover point for potential buy and sell opportunities. All three strategies involve making trading decisions at a specific time before the market closes and using market orders to execute trades. The code for all strategies is straightforward and easy to implement using Python and scheduling functions.

  • 01:00:00 Dr. Hui Liu and Aditya Gupta explain how to use moving averages to decide when to buy or sell stocks in a portfolio. They demonstrate how to implement this strategy on the iBridgePy platform and then backtest it by applying historical data to see how well it performs. The tutorial walks through how to use the testme.py entry point in iBridgePy to input historical data for simulation and output results for the account balance and transaction details.

  • 01:05:00 The speaker explains how to view the simulation results of an algorithmic trading strategy by accessing the performance analysis chart. The chart displays the balance log and statistics, such as the Sharpe ratio, mean, and standard deviation, which can be customized further. The speaker also highlights how iBridgePy can handle multiple accounts and rebalance them. The platform is flexible, easy to use, and can be utilized to set up an algorithmic trading platform, backtest and live trade together, trade with different brokers, and manage multiple accounts. The speaker also invites viewers to check out their rent-a-coder service for coding assistance and to subscribe to their YouTube channel for free tutorials.

  • 01:10:00 The presenters discuss how iBridgePy with Interactive Brokers can be used for trading futures and options, along with other types of contracts. They explain that the Super Symbol feature allows for defining more contract types, such as stock options, futures, indexes, forex, and more. They give an example of a structured product traded on the Hong Kong exchange, which is not a stock; the Super Symbol function makes it possible to trade any contract type other than stocks. They also briefly discuss stop losses and how they can be incorporated into the code or built into an algorithm.
Published 2020.10.02 • www.youtube.com

Long Term Enterprise Valuation Prediction by Prof S Chandrasekhar | Research Presentation

Professor S. Chandrasekhar is a senior professor and the Director of Business Analytics at IFIM Business School in Bangalore. With over 20 years of experience in academia, he has held positions such as Chair Professor and Director at the FORE School of Management in New Delhi and Professor at the Indian Institute of Management, Lucknow. He holds a Bachelor's degree in Electrical Engineering, a Master's degree in Computer Science from IIT Kanpur, and a Doctorate in Quantitative & Information Systems from the University of Georgia, USA.

In this presentation, Professor S. Chandrasekhar focuses on predicting the long-term Enterprise Value (EV) of a company using advanced machine learning and natural language processing techniques. Unlike market capitalization, which primarily considers shareholder value, Enterprise Value provides a more comprehensive valuation of a company by incorporating factors such as long-term debt and cash reserves.

To calculate the EV, market capitalization is adjusted by adding long-term debt and subtracting cash reserves. By predicting the enterprise value up to six months in advance on a rolling basis, this approach can assist investors and rating companies in gaining a long-term perspective on investment growth and managing associated risks.
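
That adjustment is simple arithmetic, as a quick illustration shows (the figures are made up):

```python
def enterprise_value(market_cap: float, long_term_debt: float, cash: float) -> float:
    """EV = market capitalization + long-term debt - cash reserves."""
    return market_cap + long_term_debt - cash

# e.g. $50B market cap, $10B long-term debt, $4B cash -> $56B EV
assert enterprise_value(50e9, 10e9, 4e9) == 56e9
```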

Published 2020.09.30 • www.youtube.com

Credit Risk Modeling by Dr Xiao Qiao | Research Presentation

Good morning, good afternoon, good evening. My name is Vedant, and I am from QuantInsti. Today, I have the pleasure of being your host for this event. We are joined by Dr. Xiao, a co-founder of Parachronic Technologies, who will be sharing his expertise on credit risk modeling using deep learning. Dr. Xiao's research interests primarily revolve around asset pricing, financial econometrics, and investments. His work has been recognized by Forbes, the CFA Institute, and Institutional Investor. Additionally, Dr. Xiao serves on the editorial boards of the Journal of Portfolio Management and the Global Commodities Applied Research Digest. He holds a PhD in Finance from the University of Chicago.

During this session, Dr. Xiao will delve into the topic of credit risk modeling and explore the applications of deep learning in this field. He will discuss how deep learning can be used to price and calibrate complex credit risk models, focusing particularly on its efficacy in cases where closed-form solutions are not available; deep learning offers a conceptually simple and efficient alternative in such scenarios. Dr. Xiao expresses his gratitude for being part of QuantInsti's 10-year anniversary and is excited to share his insights.

Moving forward, the discussion centers on the credit market, specifically its massive scale and the increasing importance of credit default swaps (CDS). With an estimated CDS notional outstanding of around $8 trillion as of 2019, the market has been growing steadily, and CDS index notional has also grown substantially, reaching almost $6 trillion in recent years. Moreover, the global bond market exceeds a staggering $100 trillion, with a significant portion comprising corporate bonds that carry inherent credit risk due to the potential default of the issuing institutions.

As credit markets evolve and become more complex, credit risk models have also become increasingly intricate to capture the dynamic nature of default risk. These models often employ stochastic state variables to account for the randomness present in financial markets across different time periods and maturities. However, the growing complexity of these models has made their estimation and solution computationally expensive. This issue will be a focal point later in the presentation.

Machine learning, with its transformative impact on various fields including finance, has gained prominence in recent years. It is increasingly employed in empirical finance, such as cross-sectional asset pricing and stock portfolio construction. Notably, deep learning has been used to approximate derivatives and options pricing, as well as to calibrate stochastic volatility models. In this paper, Dr. Xiao and his colleague, Gerardo Manzo from Kempos Capital, propose applying deep learning to credit risk modeling. Their research demonstrates that deep learning can effectively replace complex credit risk model solutions, resulting in efficient and accurate credit spread computation.

To provide further context, Dr. Xiao introduces the concept of credit risk modeling. He explains that the price of a defaultable bond is determined by the probability-weighted average of the discounted cash flows in both default and non-default scenarios. The probability of default is a crucial quantity in credit risk models as it quantifies the likelihood of default. Two main types of credit risk models exist: structural models and reduced-form models. Structural models establish a direct link between default events and the capital structure of an entity. On the other hand, reduced-form models represent default risk as a statistical process, typically utilizing a Poisson process with a default intensity parameter. Dr. Xiao highlights that credit risk models involve solving for pricing functions to derive credit spreads, which can be computationally intensive due to the need for numerical integration and grid searches.
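
To make the pricing idea concrete, a one-period version of the probability-weighted average can be written down (an illustration, not the paper's full model):

```latex
% One-period illustration of the probability-weighted price described above:
% discount factor D, default probability p, face value F, recovery rate R.
P = D\,\bigl[(1 - p)\,F + p\,R\,F\bigr]

% In a reduced-form model, default arrives via a Poisson process with
% intensity \lambda, so the probability of default before maturity T is
p = 1 - e^{-\lambda T}
```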

This is where deep learning enters the picture. Dr. Xiao proceeds to explain neural networks and deep learning, illustrating how they can be applied to credit risk modeling, and noting that neural networks introduce the non-linearity these pricing problems require.

Neural networks, a fundamental component of deep learning, consist of interconnected layers of artificial neurons that mimic the structure of the human brain. These networks can learn complex patterns and relationships from data through a process known as training. During training, the network adjusts its internal parameters to minimize the difference between predicted outputs and actual outputs, thereby optimizing its performance.

Dr. Xiao explains that deep learning can be leveraged to approximate complex credit risk models by training neural networks on historical data. The neural network learns the mapping between input variables, such as economic and financial factors, and the corresponding credit spreads. Once trained, the network can be used to estimate credit spreads for new input data efficiently.
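
As a hypothetical sketch of this train-then-reuse pattern (a placeholder function stands in for the expensive credit risk model; this is not the paper's actual setup):

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

# Hypothetical stand-in: X holds model state variables (e.g. default
# intensity, rate, maturity, recovery) and y the credit spread that the
# slow numerical solver would produce for each input.
X = rng.uniform(size=(10_000, 4))
y = 0.02 * X[:, 0] + 0.01 * X[:, 1] * X[:, 3]   # placeholder pricing function

net = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=500, random_state=0)
net.fit(X, y)

# Once trained, spreads for new inputs are near-instant to evaluate:
new_spreads = net.predict(rng.uniform(size=(5, 4)))
```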

One of the key advantages of using deep learning in credit risk modeling is its ability to approximate complex pricing functions. Traditionally, credit risk models employ numerical integration techniques and grid searches to solve for pricing functions, which can be computationally demanding and time-consuming. Deep learning offers a more efficient alternative by directly approximating the pricing function through the neural network's learned mapping.

Dr. Xiao highlights that deep learning models can capture non-linear relationships and interactions between input variables, which are often present in credit risk models. This flexibility allows the neural network to adapt to the complexities of credit markets and generate accurate credit spread estimates.

Furthermore, deep learning models can handle missing or incomplete data more effectively compared to traditional methods. They have the capability to learn from available data and make reasonable predictions even in the presence of missing information. This is particularly beneficial in credit risk modeling, where data may be sparse or contain gaps.

To validate the efficacy of deep learning in credit risk modeling, Dr. Xiao and his colleague conducted extensive empirical experiments using a large dataset of corporate bonds. They compared the performance of deep learning-based credit spread estimates with those obtained from traditional credit risk models. The results demonstrated that deep learning models consistently outperformed traditional models in terms of accuracy and computational efficiency.

Dr. Xiao concludes his presentation by emphasizing the transformative potential of deep learning in credit risk modeling. He highlights the efficiency, accuracy, and flexibility of deep learning models in approximating complex credit risk models, particularly in cases where closed-form solutions are unavailable or computationally demanding.

Following the presentation, the floor is open for questions from the audience. Attendees can inquire about specific applications of deep learning in credit risk modeling, data requirements, model interpretability, and any other relevant topics. Dr. Xiao welcomes the opportunity to engage with the audience and provide further insights based on his expertise and research findings.

Q&A session after Dr. Xiao's presentation:

Audience Member 1: "Thank you for the informative presentation, Dr. Xiao. I'm curious about the interpretability of deep learning models in credit risk modeling. Traditional models often provide transparency into the factors driving credit spread estimates. How do deep learning models handle interpretability?"

Dr. Xiao: "That's an excellent question. Interpreting deep learning models can be challenging due to their inherent complexity. Deep neural networks operate as black boxes, making it difficult to directly understand the internal workings and interpret individual neuron activations. However, there have been ongoing research efforts to enhance interpretability in deep learning."

"Techniques such as feature importance analysis, gradient-based methods, and attention mechanisms can help shed light on the factors influencing the model's predictions. By examining the network's response to different input variables, we can gain insights into their relative importance in determining credit spreads."

"Additionally, model-agnostic interpretability methods, such as LIME (Local Interpretable Model-Agnostic Explanations) or SHAP (SHapley Additive exPlanations), can be applied to deep learning models. These methods provide explanations for individual predictions by approximating the model locally around a specific input."

"It's important to note that while these techniques offer some level of interpretability, the primary strength of deep learning models lies in their ability to capture complex patterns and relationships in the data. The trade-off between interpretability and model performance is a consideration in credit risk modeling, and researchers are actively exploring ways to strike a balance between the two."

Audience Member 2: "Thank you for the insights, Dr. Xiao. I'm curious about the data requirements for training deep learning models in credit risk modeling. Could you elaborate on the quantity and quality of data needed?"

Dr. Xiao: "Certainly. Deep learning models typically benefit from large amounts of data for effective training. In credit risk modeling, having a diverse and comprehensive dataset is crucial to capture the complexities of credit markets."

"Data for training deep learning models should include a variety of economic and financial indicators, such as macroeconomic factors, industry-specific variables, historical credit spreads, and relevant market data. The more diverse and representative the dataset, the better the model can generalize to new credit risk scenarios."

"Regarding data quality, it's important to ensure the accuracy, consistency, and relevance of the input variables. Data preprocessing techniques, such as data cleaning, normalization, and feature engineering, play a vital role in preparing the dataset for training. Removing outliers, addressing missing values, and scaling the data appropriately are crucial steps in ensuring reliable model performance."

"Furthermore, maintaining up-to-date data is essential, as credit risk models need to adapt to changing market conditions. Regular updates and monitoring of the data quality and relevance are necessary to ensure the ongoing accuracy of the deep learning models."

These were just a couple of questions from the audience, but the Q&A session continues with various other inquiries and discussions on topics such as model robustness, potential limitations of deep learning in credit risk modeling, and real-world implementation challenges. Dr. Xiao actively engages with the audience, sharing his expertise and insights gained from his research.

Credit Risk Modeling by Dr Xiao Qiao | Research Presentation
Credit Risk Modeling by Dr Xiao Qiao | Research Presentation
  • 2020.09.30
  • www.youtube.com
Application of AI & News Sentiment in Finance [Research Presentations] Topic 1: Credit Risk Modeling by Dr Xiao Qiao. Deep learning can be used to price and cal...
 

What impacts a Quant Strategy? [Panel Discussion] - Sep 24, 2020



What impacts a Quant Strategy? [Panel Discussion] - Sep 24, 2020

During the panel discussion on alpha-seeking strategies in finance, Nicholas argues that it is incredibly difficult to create alpha in mutual funds and hedge funds, stating that 99% of investors should not actively seek alpha positions. He highlights the challenges of generating alpha in market-neutral hedge funds and suggests that factor investing is a more viable option for outperforming the market.

The panel agrees with Nicholas and emphasizes the importance of finding unique data sources and using them to develop a systematic strategy in factor investing. They believe that this approach is key to successful alpha generation. They also discuss the difficulty of achieving true alpha in the current market and suggest alternative strategies such as asset allocation and risk management.

The panel advises against solely focusing on seeking alpha and suggests looking at niches within the market that are less covered and, therefore, less efficient. They emphasize the importance of constructing a well-built portfolio benchmark, such as beta strategies, and encourage investors to look beyond the S&P 500 to find potentially profitable stocks.

The panelists caution that even if alpha is identified, it may not be possible to harvest it due to potential conflicts with prime brokers. They also discuss the benefits of trading assets that are not part of the main investment universe in futures or are not part of the manager's mandate. Such assets are often less crowded, resulting in higher Sharpe ratios compared to assets that are well-known in the market. However, they acknowledge that trading these assets may require a smaller portfolio size and incur higher fees due to their lower liquidity and increased trading effort.

Laurent agrees with Nicholas's view that traditional active management strategies, such as picking stocks on the long side, have never worked well. He believes that the burden of proof has shifted to active managers to demonstrate their ability to evolve and perform in difficult markets.

The panel also discusses the importance of considering the short side of a long-short investment strategy. They emphasize the need for risk management and stress testing the strategy through extensive backtesting, including examining the impact of transaction costs and market structure changes. The panel recommends spending ample time with the strategy to identify the few that survive the validation process.

The discussion moves on to the practical implications and visualization of strategies for alpha generation. The panel acknowledges the value of academic research but notes that it often lacks practical implications and implementation details. They stress the importance of creating strategies that can be executed from a portfolio perspective, survive transaction costs, and align with clients' expectations. Visual representation, such as charts illustrating trading drawdowns, is preferred over tables as it helps investors hold onto strategies during significant drawdowns.

The speaker highlights the importance of building a strategy that aligns with the client's objectives and is synchronized with economic and fundamental reasons. They emphasize the need for simplicity and explainability, stating that a strategy should be able to be summarized in a few simple sentences. Backtesting is not solely meant to prove that a strategy works but to test its resilience by pushing its limits.

The panel reflects on the impact of quant strategies and identifies mean reversion and trend following as the two fundamental strategies regardless of asset class or time frame. They compare trend following to buying lottery tickets, with low win rates and high volatility, and highlight mean reversion as a strategy that generates one dollar at a time with high win rates and low volatility. They discuss the importance of managing losses and optimizing gain expectancy by tilting and blending these strategies. They also touch on the challenges of short selling and riding the tail of institutional holders.

Risk management takes center stage in the discussion, with the panel emphasizing the need for positive expectancy in stock market strategies. They consider the stock market as an infinite, random, and complex game and suggest blending high win rate trades with lottery tickets to mitigate potential losses. The panel also discusses when to retire a strategy, highlighting the importance of staying current with research and considering structural changes or market fluctuations that could impact a strategy. Retiring a strategy should only occur after thorough research and framework changes.

The panel addresses the difficulties of managing multiple investment strategies and dealing with underperforming strategies. They stress the importance of sticking to the investment mandate and understanding clients' expectations. The panel suggests having a process for finding new strategies and implementing them while knowing when to retire strategies that are not performing well. They discuss two approaches to handling underperforming strategies, either holding onto them for a long-term view or using trend following techniques and removing them from the portfolio. The decision depends on the specific mandate and funding of the multi-strategy, multi-asset fund.

The panelists highlight the challenges of quant investing and the importance of having faith in the work done, regardless of the amount of research. They mention the possibility of morphing strategies into better ones and emphasize the scarcity of truly diversifying strategies. They also touch on shorting stocks, such as Tesla, and note that shorting a stock is essentially shorting an idea or belief, particularly in valuation shorts that are based on a story. They provide an example from Japan in 2005, where a consumer finance company had a stratospheric valuation but remained a peaceful short until it eventually went bankrupt a few years later.

The speakers discuss the pitfalls of shutting down a strategy based on surreal valuations that don't align with traditional expectations. They mention companies like Tesla, whose market cap has exceeded that of much larger carmakers such as Toyota. The panelists stress the importance of symmetry in having the same rules for both the short and long sides, although they acknowledge that it is more challenging. They believe that many strategies can be improved, and even different asset classes are, in essence, a bet on economic growth.

The panel also discusses the difficulty of finding strategies that truly diversify and benefit from financial uncertainty and volatility. They highlight the limitations of classic hedge fund strategies in this regard and recommend aspiring quants to think in templates and be willing to discard strategies that don't work. They suggest that retail investors focus on low-cost diversified ETFs and prioritize risk management.

The panel concludes the discussion by addressing the efficiency of financial markets and the challenges individual investors face when competing against professionals. They recommend using academic research papers as inspiration rather than gospel and finding ideas that are not mainstream to avoid excessive correlation with the broader market. They provide their Twitter handles, LinkedIn profiles, and websites for those interested in exploring their work further.

The panel delves into various aspects of alpha-seeking strategies, highlighting the difficulties, alternative approaches, risk management considerations, and the importance of practical implications and visualization. Their insights provide valuable guidance for investors and quants navigating the complex landscape of finance.

  • 00:00:00 The panelists discuss the concept of alpha-seeking strategies in finance. Nicholas argues that 99% of investors should not look for alpha-seeking positions as the evidence shows that it is incredibly tough to create alpha in mutual funds and hedge funds. He highlights the difficulty of generating alpha in market-neutral hedge funds and suggests that factor investing is a more viable option for those seeking to outperform the market. The panel agrees that finding unique data sources and using them to develop a systematic strategy is the key to successful factor investing.

  • 00:05:00 The panelists discuss the difficulty of achieving true alpha in the current market and suggest alternative strategies, such as asset allocation and risk management. They advise against focusing solely on seeking alpha and suggest looking at niches within the market that are less covered and therefore less efficient. Additionally, the panelists emphasize the importance of constructing a well-built portfolio benchmark, like beta strategies, and looking beyond the S&P 500 to find potentially profitable stocks. They caution that even if alpha is identified, it may not be possible to harvest it due to potential conflicts with prime brokers.

  • 00:10:00 The panel discusses the benefits of trading assets that are not part of the main investment universe in futures or are not part of the manager's mandate. Because such assets are less crowded, strategies built on them can achieve Sharpe ratios roughly 50% to 100% higher than those built on well-known assets. The discussion also touches on portfolio size and fees: these assets require a smaller portfolio and incur higher trading costs because they are less liquid and take more effort to trade. Laurent agrees with Nicholas's view that the traditional active management strategy of picking stocks on the long side has never worked, and the burden of proof has shifted to active managers to prove their ability to evolve and perform in difficult markets.

  • 00:15:00 The panel discusses the importance of considering the short side of a long-short investment strategy. They point out that while investors can handle usury fees and claims on a pound of flesh on the long side, they cannot stomach the costs associated with protecting capital or generating alpha during market downturns. They emphasize the need for risk management and for stress-testing the strategy through extensive backtesting, including examining the impact of transaction costs and market structure changes. The panel recommends spending ample time with the strategy to identify the few that survive the validation process.

  • 00:20:00 The panel discusses the importance of practical implications and visualization of strategies when it comes to alpha generation. While academic research is valuable, it often lacks practical implications, such as how a strategy can be executed from a portfolio perspective and its ability to survive transaction costs and implementation. Additionally, investors prefer strategies presented with charts rather than tables, since charts visually show the trading drawdowns and make it easier to hold on through a 30% drawdown. The panel also emphasizes the importance of creating a strategy that is synchronized with what clients/bosses expect and being able to explain why the strategy is underperforming the benchmark in a booming market. Investors tend to have little patience for alpha-generating strategies, so it is crucial to make sure the strategy is implementable and can be distributed as a product.

  • 00:25:00 The speaker emphasizes the importance of building a strategy that aligns with what the client is looking for and is synchronized with economic and fundamental reasons. The speaker highlights the need for simplicity and explainability in the strategy, stating that it should be able to be explained in a few simple sentences. The purpose of backtesting is not to prove that a strategy works, but to break it and see if it still produces alpha. The trading rules are not as important as the theory behind the strategy, which should be tested to ensure that it can withstand anything that could break it.

  • 00:30:00 The panel of experts discuss what impacts a Quant strategy. They reflect on the fact that mean reversion and trend following are the only two strategies regardless of asset class or time frame. While trend following is like buying lottery tickets, with a low win rate and high volatility, mean reversion makes one dollar at a time and has a high win rate and low volatility. The experts also discuss the importance of managing losses and consider how to tilt and blend these strategies to optimize gain expectancy. Finally, they touch on the challenges of short selling and riding the tail of institutional holders.

  • 00:35:00 The panel discusses the importance of risk management and the need to have a positive expectancy when it comes to strategies in the stock market. The speaker believes that the stock market is an infinite, random, and complex game and that it's essential to blend high win rate trades with lottery tickets to reduce potential loss. The panel also discusses when to retire a strategy, and while they agree that it should be avoided, it's crucial to stay current and research any structural changes or market fluctuations that could impact a strategy. Ultimately, retiring a strategy should only occur after thorough research and framework changes.

  • 00:40:00 The panel discussed the difficulties in managing multiple investment strategies and how to handle underperforming strategies. They emphasized the importance of sticking to your investment mandate and understanding clients' expectations. It's crucial to have a process for finding new strategies and implementing them, but also knowing when to retire strategies that are not performing well. The panel talked about two ways of handling underperforming strategies, either holding onto them for a long-term view or doing trend following and retiring them from the portfolio. Ultimately, it depends on the mandate and funding of the multi-strategy, multi-asset fund in question.

  • 00:45:00 The panelists discuss the difficulty of quant investing and how it requires faith in the work done, regardless of the amount of research. Retiring strategies makes sense when they underperform, but looking at momentum may help determine whether a strategy is still working well. The panelists note that diversification is key and that cutting a strategy is not easy when managing multiple strategies. They also discuss shorting names such as Tesla and note that shorting a stock is actually shorting an idea or belief, because valuation shorts are based on a story. The panelists give a precise example from Japan in 2005, where the valuation of a consumer finance company was stratospheric, but it was a peaceful short until the company went bankrupt a few years later.

  • 00:50:00 The speakers discuss the pitfalls of shutting down a strategy due to a surreal valuation that doesn't align with traditional expectations. Once a company's value has reached a certain point, it can keep climbing as far as it wants, like Tesla, whose market cap is larger than Toyota's. The speakers also talk about the importance of symmetry in having the same rules for both the short and long side, which is much harder but avoids conflicts and manual overrides. They believe that a lot of strategies can be morphed into better ones, and there are very few truly diversifying strategies. Even different asset classes are, in essence, a bet on economic growth.

  • 00:55:00 The panel discusses the challenges of finding strategies that truly diversify and benefit from financial uncertainty and volatility. They mention that most classic hedge fund strategies fail in this regard. They also discuss the advice they would give to aspiring quants, including the importance of thinking in templates and being willing to kill your own “babies” or strategies that don’t work. They suggest that retail investors should focus on low-cost diversified ETFs and prioritize risk management.

  • 01:00:00 The speakers discussed the efficiency of the financial markets and how difficult it can be for individual investors to compete against professionals. They used a sports analogy to explain that trying to trade against major financial indexes is like playing against the best athletes in the world and is therefore extremely challenging. They recommended that investors use academic research papers as inspiration rather than taking them as gospel and try to find ideas that are not mainstream in order to avoid being too correlated with the broader market.

  • 01:05:00 The panelists discuss the validity of technical analysis in quantitative investing. Although technical analysis has been around for hundreds of years and is still widely followed, there is little support for it from an institutional perspective, and it is viewed as very discretionary and often untested. One panelist recommends trend following as a more robust and quantitative approach, cautioning against relying on folklore such as RSI and MACD. The panelists share their Twitter handles, LinkedIn profiles, and websites for those interested in their work.
What impacts a Quant Strategy? [Panel Discussion] - Sep 24, 2020
What impacts a Quant Strategy? [Panel Discussion] - Sep 24, 2020
  • 2020.09.25
  • www.youtube.com
Compared to discretionary choices that an old-school trader/investor makes, quant trading is based on, ostensibly, more objective criteria. Are they systemat...
 

Trading with Deep Reinforcement Learning | Dr Thomas Starke



Trading with Deep Reinforcement Learning | Dr Thomas Starke

Dr. Thomas Starke, an expert in deep reinforcement learning for trading, introduces the concept of reinforcement learning (RL) and its application in the trading domain. Reinforcement learning allows machines to learn how to perform a task without explicit supervision by determining the best actions to take in order to maximize favorable outcomes. He uses the example of a machine learning to play a computer game, where it progresses through different steps while responding to visual cues on the screen. The machine's success or failure is determined by the decisions it made throughout the game.

Dr. Starke dives into the specifics of trading with deep reinforcement learning by discussing the Markov decision process. In this process, each state corresponds to a particular market parameter, and an action taken transitions the process to the next state. Depending on the transition, the agent (the machine) receives a positive or negative reward. The objective is to maximize the expected reward given a certain policy and state. In the context of trading, market parameters help identify the current state, enabling the agent to make informed decisions about which actions to take.

The decision-making process in trading involves determining whether to buy, sell, or hold positions based on various indicators that inform the state of the system. The ultimate goal is to receive the best possible reward, which is the profit or loss resulting from the trade. Dr. Starke notes that traditional machine learning approaches assign specific labels to states, such as immediate profit or loss. However, this can lead to incorrect labels if a trade temporarily goes against expectations. The machine needs to understand when to stay in a trade even if it initially incurs losses, having the conviction to wait until the trade reverts back to the average line before exiting.

To address the difficulty of labeling every step in a trade's profit and loss, Dr. Starke introduces retroactive labeling in reinforcement learning. Traditional machine learning labels every step in a trade, making it challenging to predict whether a trade may become profitable in the future despite initial losses. Retroactive labeling utilizes the Bellman equation to assign a non-zero value to each action and state, even if it doesn't yield immediate profit. This approach allows for the possibility of reversion to the mean and eventual profitability.

Delayed gratification is a key challenge in trading, and Dr. Starke explains how reinforcement learning helps overcome this hurdle. The Bellman equation is used to calculate the reward of an action, incorporating both the immediate reward ("r") and the cumulative reward ("q"). The discount factor ("gamma") determines the weight given to future outcomes compared to previous ones. By leveraging reinforcement learning, trading decisions are not solely based on immediate rewards but also take into account the potential for higher future rewards. This approach enables more informed decision-making compared to purely greedy decision-making.
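
A minimal tabular sketch of this update (illustrative only; in Dr. Starke's setting a neural network approximates these values rather than a table) might look as follows, with the actions standing for short, flat, and long positions:

```python
def q_update(q, state, action, reward, next_state, alpha=0.1, gamma=0.95):
    """One tabular Q-learning step based on the Bellman equation:
    Q(s, a) <- Q(s, a) + alpha * (r + gamma * max_a' Q(s', a') - Q(s, a)).

    `q` is a dict mapping (state, action) -> value; the actions are
    illustrative trading decisions: -1 = short, 0 = flat, +1 = long.
    """
    actions = (-1, 0, 1)
    best_next = max(q.get((next_state, a), 0.0) for a in actions)
    old = q.get((state, action), 0.0)
    q[(state, action)] = old + alpha * (reward + gamma * best_next - old)
```

With gamma close to 1, future rewards weigh almost as much as immediate ones; setting gamma to 0 recovers purely greedy decision-making.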

Deep reinforcement learning is particularly useful in trading due to the complexity of financial markets and the large number of states and influences to consider. Dr. Starke highlights the use of deep neural networks to approximate the value tables built from past experiences, eliminating the need for an enormous lookup table. He emphasizes the importance of selecting inputs that have predictive value and testing the system for known behavior. The state in trading involves historical and current prices, technical indicator data, alternative data sources like sentiment or satellite images, and more. Finding the right reward function and inputs to define the state is crucial. The constant updating of the tables approximated by neural networks allows the machine to progressively learn and make better trading decisions.

Dr. Starke discusses how to structure the price series for training using reinforcement learning. Instead of sequentially running through the price series, one can randomly enter and exit at different points. The choice of method depends on the specific requirements and preferences of the user. He also delves into the challenge of designing a reward function, providing examples such as using pure percentage profit and loss (P&L), profit per tick, the Sharpe ratio, and various types of punishments to avoid prolonged drawdowns or excessive trade durations.
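
A few of these reward designs, written as a rough Python sketch over an array of per-step returns (the exact definitions used in the talk are not specified, so the penalty weighting here is an assumption):

```python
import numpy as np

def reward_pnl(returns):
    """Plain cumulative percentage P&L over the episode."""
    return float(np.sum(returns))

def reward_sharpe(returns, eps=1e-9):
    """Sharpe-style reward: mean per-step return scaled by its volatility."""
    return float(np.mean(returns) / (np.std(returns) + eps))

def reward_drawdown_penalized(returns, penalty=2.0):
    """P&L minus a penalty on the worst peak-to-trough drawdown, which
    discourages strategies that sit through long losing stretches."""
    equity = np.cumsum(returns)
    max_drawdown = np.max(np.maximum.accumulate(equity) - equity)
    return float(np.sum(returns) - penalty * max_drawdown)
```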

In terms of inputs, Dr. Starke suggests multiple options, including open, high, low, close, and volume values, candlestick patterns, technical indicators like the relative strength index, and various time-related factors. Inputs can also include prices and technical indicators of other instruments and alternative data sources like sentiment analysis or satellite images. These inputs are combined to construct a complex state, similar to how a computer game utilizes input features to make decisions. Finding the right reward function that aligns with one's trading style is critical, as it enables the optimization of the system accordingly.
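
For illustration, a state vector along these lines could be assembled from an OHLCV DataFrame as follows; the particular features and the hand-rolled RSI are assumptions chosen for the sketch, not the talk's actual inputs:

```python
import pandas as pd

def rsi(close: pd.Series, window: int = 14) -> pd.Series:
    """A simple relative strength index computed from closing prices."""
    delta = close.diff()
    gain = delta.clip(lower=0).rolling(window).mean()
    loss = (-delta.clip(upper=0)).rolling(window).mean()
    return 100 - 100 / (1 + gain / loss)

def build_state(df: pd.DataFrame) -> pd.DataFrame:
    """Stack a few illustrative features into one state row per bar;
    `df` is assumed to carry open/high/low/close/volume columns."""
    state = pd.DataFrame(index=df.index)
    state["ret_1"] = df["close"].pct_change()          # last bar's return
    state["range"] = (df["high"] - df["low"]) / df["close"]
    state["rsi"] = rsi(df["close"]) / 100.0            # scaled to [0, 1]
    vol = df["volume"]
    state["vol_z"] = (vol - vol.rolling(20).mean()) / vol.rolling(20).std()
    return state.dropna()
```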

The testing phase is an essential step for reinforcement learning in trading. Dr. Starke explains the series of tests he conducts, including clean sine waves, trend curves, randomized series with no structure, different types of order correlations, noise in clean test curves, and recurring patterns. These tests help evaluate whether the machine consistently generates profits and identify any flaws in the coding. He also discusses the use of different types of neural networks, such as standard, convolutional, and long short-term memory (LSTM) networks. Dr. Starke prefers simpler neural networks that suffice for his needs and do not require excessive computational effort.
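
The synthetic series for such sanity checks are easy to generate; a sketch with arbitrary parameters:

```python
import numpy as np

def make_test_curves(n: int = 1000, seed: int = 0) -> dict:
    """Synthetic price series for sanity-checking the learner: a learner
    that cannot profit on the clean sine wave has a bug, while one that
    'profits' on the structureless random walk is fitting noise."""
    rng = np.random.default_rng(seed)
    t = np.arange(n)
    return {
        "sine": 100 + 10 * np.sin(2 * np.pi * t / 50),
        "trend": 100 + 0.05 * t + rng.normal(0, 0.5, n),
        "random_walk": 100 + np.cumsum(rng.normal(0, 1, n)),
    }
```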

Dr. Starke acknowledges the challenges of trading with reinforcement learning, such as distinguishing between signal and noise and the issue of local minima. Reinforcement learning struggles with noisy financial time series and dynamic financial systems characterized by changing rules and market regimes. However, he demonstrates that smoothing the price curve with a simple moving average can significantly enhance the performance of the reinforcement learning machine. This insight offers guidance on building a successful machine learning system capable of making profitable trading decisions.
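
The smoothing step itself is a one-liner with pandas; the window length below is an arbitrary choice:

```python
import pandas as pd

def smooth(prices: pd.Series, window: int = 10) -> pd.Series:
    """Feed the learner a simple moving average of the price instead of
    the raw series, trading a little lag for less noise."""
    return prices.rolling(window).mean().dropna()
```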

Regarding audience questions, Dr. Starke provides further insights. He confirms that the Bellman equation avoids introducing look-ahead bias, and technical indicators can be used as inputs after careful analysis. He suggests that satellite images could be valuable for predicting stock prices. In terms of time frames, reinforcement trading can be applied to small time frames depending on the computational time of the neural network. He discusses the sensitivity of reinforcement trading algorithms to market anomalies and explains why training random decision trees using reinforcement learning does not make sense.

When asked about the choice of neural networks, Dr. Starke recommends using neural networks for trading instead of decision trees or support vector machines due to their suitability for the problem. Tuning the loss function based on the reward function is essential for optimal performance. He acknowledges that some attempts have been made to use reinforcement learning for high-frequency trading, but slow neural networks lacking responsiveness in real-time markets have been a limitation. Dr. Starke emphasizes the importance of gaining market knowledge to pursue a trading career successfully, making actual trades, and learning extensively throughout the process. Finally, he discusses the challenges associated with combining neural networks and options trading.

Dr. Starke also addresses the use of options data as an input for trading the underlying instrument, rather than solely relying on technical indicators. He offers insights on using neural networks to determine the number of lots to buy or sell and incorporating factors like spread, commission, and slippage into the algorithm by building a slippage model and integrating these factors into the reward function. He advises caution when using neural networks to decide trade volumes and suggests using output values to adjust portfolio weights accordingly. He concludes by expressing gratitude for the audience's questions and attendance at his talk, inviting further engagement and interaction through LinkedIn.

During the presentation, Dr. Starke emphasized the importance of continuous learning and improvement in the field of trading with reinforcement learning. He highlighted the need to constantly update the neural networks and refine the system based on new data and market conditions. This iterative process allows the machine to adapt to changing dynamics and enhance its decision-making capabilities over time.

Dr. Starke also discussed the concept of model validation and the significance of out-of-sample testing. It is crucial to evaluate the performance of the trained model on unseen data to ensure that it generalizes well and is not overfitting to specific market conditions. Out-of-sample testing helps validate the robustness of the system and provides a more realistic assessment of its performance.

Additionally, he touched upon the challenges of data preprocessing and feature engineering in trading with reinforcement learning. Preparing the data in a suitable format and selecting informative features are critical steps in building an effective trading model. Dr. Starke suggested exploring various techniques such as normalization, scaling, and feature selection to optimize the input data for the neural networks.

Furthermore, Dr. Starke acknowledged the limitations of reinforcement learning and its susceptibility to market anomalies or extreme events. While reinforcement learning can offer valuable insights and generate profitable strategies, it is important to exercise caution and understand the inherent risks involved in trading. Risk management and diversification strategies play a crucial role in mitigating potential losses and ensuring long-term success.

In conclusion, Dr. Starke's presentation provided a comprehensive overview of the application of reinforcement learning in trading. He discussed the key concepts, challenges, and best practices associated with using deep reinforcement learning algorithms to make informed trading decisions. By leveraging the power of neural networks and the principles of reinforcement learning, traders can enhance their strategies and potentially achieve better performance in dynamic and complex financial markets.

  • 00:00:00 Dr. Thomas Starke introduces deep reinforcement learning for trading, a topic that he has been interested in for several years. Reinforcement learning (RL) is a technique that allows a machine to solve a task without supervision, and it learns by itself what to do to produce favorable outcomes. He explains how a machine that wants to learn how to play a computer game would start in a gaming scenario and move from one step to the next while responding to what it sees on the screen. Finally, the game ends, and the machine achieves success or failure based on the chain of decisions it made.

  • 00:05:00 Dr. Thomas Starke discusses trading with deep reinforcement learning and explains the concept of a Markov decision process. In this process, a state is associated with a particular market parameter, and an action transitions the process from one state to the next. Depending on the transition, the agent either receives a positive or negative reward. The objective is to maximize the expected reward given a certain policy and state. In trading, market parameters are used to identify what state the agent is in and help it make decisions on what action to take.

  • 00:10:00 Dr. Thomas Starke discusses the decision-making process involved in trading, which involves deciding whether to buy, sell, or hold based on various indicators that inform the state of the system. The goal is to receive the best possible reward, which is the profit or loss of the trade. However, the traditional machine learning approach of giving a state a particular label, such as immediate profit or loss, can lead to incorrect labels if the trade goes against us in the immediate future. Therefore, the machine must understand when to stay in the trade even if it initially goes against us and have the conviction to wait until the trade reverts back to the average line to exit the trade.

  • 00:15:00 Dr. Thomas Starke discusses retroactive labeling and how it is used in reinforcement learning to address the difficulty of labeling every step in a trade's profit and loss. He explains that traditional machine learning labels every step in the trade, making it difficult to predict whether the trade may become profitable in the future if it experiences a loss. Retroactive labeling uses the Bellman equation to assign a non-zero value to each action and state, even if it does not produce immediate profit, allowing for a reversion to the mean and eventual profit.

  • 00:20:00 Dr. Thomas Starke explains how to use reinforcement learning to solve the problem of delayed gratification in trading. The Bellman equation is used to calculate the reward of an action, with "r" representing immediate reward and "q" representing cumulative reward. Gamma is a discount factor that assigns weight to future outcomes compared to previous outcomes. By using reinforcement learning, trading decisions are not solely based on immediate rewards but also on holding positions for higher future rewards. This allows for more informed decision-making compared to greedy decision-making.

  • 00:25:00 Dr. Thomas Starke discusses how deep reinforcement learning can help in making trading decisions based on future outcomes. Traditional reinforcement learning involves building tables based on past experiences, but in trading this becomes complex due to the large number of states and influences. The solution is therefore to use deep reinforcement learning and neural networks to approximate these tables without creating an enormous table. He explains the implementation in terms of the gamification of trading and of finding the right reward function and inputs to define the state. Overall, deep reinforcement learning makes this decision-making tractable.

  • 00:30:00 Dr. Starke discusses the importance of inputs in trading and how they need to have some sort of predictive value, or else the system won't be able to make good trading decisions. He emphasizes the need to test the system for known behavior and to choose the appropriate type, size, and cost function of the neural network, dependent on the reward function chosen. He then explains how gamification works in trading, where the state is historical and current prices, technical indicator data, and alternative data sources, and the reward is the P&L of the trade. The reinforcement learner will use the Bellman equation to label observations retroactively, and through constant updating of the tables approximated by neural networks, the machine will learn to make better and better trading decisions.

  • 00:35:00 Dr. Thomas Starke discusses how to structure the price series for training using reinforcement learning. He explains that instead of running through the price series sequentially, you can randomly enter and exit at different points, and it's up to the user to decide which method to choose. He also discusses the difficulty of designing a reward function, and provides various examples and methods to structure a reward function that can be used for training, such as using pure percentage P&L, profit per tick, the Sharpe ratio, and different types of punishments to avoid long holding times or drawdowns.

  • 00:40:00 According to Dr. Thomas Starke, we have many input options, including open, high, low, close, and volume values, candlestick patterns, technical indicators like the relative strength index, time of day/week/year, different time granularities, prices and technical indicators of other instruments, and alternative data like sentiment or satellite images. These inputs are then constructed into a complex state, similar to how a computer game uses input features to make decisions. Ultimately, the key is to find the right reward function that works for your trading style and to optimize your system accordingly.

  • 00:45:00 Dr. Thomas Starke explains the testing phase that his reinforcement learner must undergo before being used to trade in the financial markets. He applies a series of tests including clean sine waves, trend curves, randomized series with no structure, different types of order correlations, noise in clean test curves, and recurring patterns to determine if the machine makes consistent profits and to find flaws in the coding. He also discusses the different types of neural networks he uses, including standard, convolutional, and long short-term memory (LSTM) networks, and his preference for simple neural networks, as they are sufficient for his needs and don't require excessive computational effort.

  • 00:50:00 Dr. Thomas Starke discusses the challenges of trading with reinforcement learning, including the difficulties of distinguishing between signal and noise and the problem of local minima. He shows that reinforcement learning struggles with noisy financial time series and dynamic financial systems with changing rules and market regimes. However, he also shows that smoothing the price curve with a simple moving average can significantly improve the performance of the reinforcement learning machine, providing insight into how to build a successful machine learning system that can make profitable trading decisions.

  • 00:55:00 Dr. Thomas Starke discusses the challenges of using reinforcement learning for trading. Firstly, reinforcement learning struggles to adapt to changes in market behavior, making it challenging to learn new behaviors. Additionally, a lot of training data is needed, but market data is often sparse. While reinforcement learning is efficient, it can overfit easily and only really acts on basic market patterns. Building more complex neural networks can overcome this, but it's a time-consuming task. Ultimately, reinforcement learning is not a silver bullet for producing profitable outcomes, and it's important to have good market experience and domain-specific knowledge to achieve successful trading outcomes. Dr. Starke offers a lecture through QuantInsti and encourages anyone interested in coding these systems to contact him on LinkedIn with well-formulated questions.

  • 01:00:00 Dr. Thomas Starke answers various questions related to trading with deep reinforcement learning. He explains that the Bellman equation does not introduce look-ahead bias, and technical indicators can sometimes be used as inputs after careful analysis. Satellite images could be useful for predicting stock prices, and reinforcement trading can be done on small time frames depending on neural network calculation time. He also discusses how sensitive reinforcement trading algos are to market anomalies, and explains why it doesn't make sense to train random decision trees using reinforcement learning.

  • 01:05:00 Dr. Thomas Starke recommends using neural networks for trading rather than decision trees or support vector machines due to their suitability for the problem. He explains that tuning the loss function based on the reward function used is essential. He mentions that people have tried to use reinforcement learning for high-frequency trading but ended up with slow neural networks that lacked responsiveness in real-time markets. He suggests that gaining market knowledge will significantly help pursue a trading career in the finance industry, making actual trades, and learning a lot in the process. Finally, he discusses whether one can use neural networks to get good results with options trading and explains the challenges of combining neural networks and options trading.

  • 01:10:00 Dr. Thomas Starke discusses how options data can be used as an input for trading the underlying instrument, as opposed to just relying on technical indicators. He also answers questions about using neural networks to decide the number of lots to buy or sell and how to incorporate spread, commission, and slippage into the algorithm by building a model for slippage and incorporating those factors into the reward function. He advises caution when using neural networks to decide on trade volumes and recommends using output values to size portfolio weights accordingly. He concludes by thanking the audience for their questions and for attending his talk.
Trading with Deep Reinforcement Learning | Dr Thomas Starke
Trading with Deep Reinforcement Learning | Dr Thomas Starke
  • 2020.09.23
  • www.youtube.com
Dr. Thomas Starke Speaks on Trading with Deep Reinforcement Learning (DRL). DRL has successfully beaten the reigning world champion of the world's hardest bo...
 

EPAT Sneak Peek Lecture - How to Optimize a Trading Strategy? - Feb 27, 2020



EPAT Sneak Peek Lecture - How to Optimize a Trading Strategy? - Feb 27, 2020

In the video, the speaker begins by providing background on QuantInsti and introducing their own experience in trading and banking. They discuss the different methodologies in trading, including systematic trading, quantitative trading, algorithmic trading, and high-frequency trading. The main focus of the video is to provide insights into developing and optimizing a trading strategy in a quantifiable manner and to compare discretionary and quantitative trading approaches.

The speaker emphasizes the importance of outperformance and the hit ratio in trading. They explain that to achieve outperformance in at least 50% of stocks with a 95% probability, traders must be correct in their predictions a certain number of times, which increases with the number of assets being tracked and traded. Systematic trading, which allows for tracking more stocks, has an advantage over discretionary trading in this regard. However, discretionary trading can provide deeper proprietary insights by tracking fewer stocks. The speaker introduces the fundamental law of investment management, which states that the performance of an investment manager over the benchmark is directly proportional to their hit ratio and the square root of the number of bets taken.
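
The fundamental law is commonly written as IR = IC × √N, where IR is the information ratio, IC the skill per bet, and N the number of independent bets. The hit-ratio claim can also be checked numerically; the sketch below uses a simplified model of independent, equally sized bets (an assumption, not the speaker's exact setup):

```python
from scipy.stats import binom

def min_bets_for_confidence(hit_ratio, confidence=0.95, max_n=10_000):
    """Smallest number of independent, equally sized bets N such that a
    majority of them succeed with probability >= `confidence`, given the
    per-bet hit ratio. Illustrates why breadth matters: a small edge
    needs many bets before it shows up reliably."""
    for n in range(1, max_n + 1):
        k = -(-n // 2)  # ceil(n / 2): the smallest majority
        if 1 - binom.cdf(k - 1, n, hit_ratio) >= confidence:
            return n
    return None

print(min_bets_for_confidence(0.55))  # a 55% hit ratio needs hundreds of bets
```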

Different types of traders, such as technical traders, fundamental traders, and quants, capture risk and returns in different ways. The speaker explains that almost all these trading approaches can be expressed as rules, making systematic trading possible. A trading strategy is defined as a mathematical set of rules that determines when to buy, sell, or hold, regardless of the market phase. The goal of a trading strategy is to generate a signal function based on incoming data and convert it into a target position for the underlying asset. While trading is complex due to market randomness and stochastic nature, rule-based strategies can help manage risk.
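
As a toy instance of such a rule set (a moving-average crossover chosen purely for illustration), the whole strategy reduces to one function from price data to a target position:

```python
import pandas as pd

def signal_to_position(prices: pd.Series, fast: int = 20, slow: int = 50) -> pd.Series:
    """Map incoming price data to a target position: +1 (long) when the
    fast moving average is above the slow one, -1 (short) when below,
    0 (flat) while the averages are still warming up."""
    fast_ma = prices.rolling(fast).mean()
    slow_ma = prices.rolling(slow).mean()
    return (fast_ma > slow_ma).astype(int) - (fast_ma < slow_ma).astype(int)
```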

The speaker delves into the functions involved in designing and implementing a trading strategy. They emphasize that the realized return in the actual market is beyond one's control and cannot be changed. Therefore, it is essential to optimize the performance function Pi, subject to constraints, by tuning the parameters of the signal and position functions. The speaker outlines the stages of strategy development, including ideation, hypothesis testing, rule conversion, backtesting, risk estimation, deployment, and the importance of seeking the next strategy after deployment.

Equations for return on investment in a trading strategy are explained, considering factors such as alpha, beta, and epsilon. The speaker also discusses the risk and P&L profile of a strategy, explaining how idiosyncratic risk can be diversified away and is not part of the expected return. The concepts of beta and alpha are introduced, with passive broad-based indexing suggested for market factor exposure and the potential for further diversification through buying factors like value or momentum. Creating alpha is recognized as a challenging task that requires careful selection or timing.
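
The decomposition referred to here is the standard r_p = alpha + beta * r_m + epsilon; a quick way to estimate it from return series is ordinary least squares, as in this sketch:

```python
import numpy as np

def alpha_beta(strategy_returns, market_returns):
    """OLS estimate of r_p = alpha + beta * r_m + eps. `beta` is the
    market exposure, `alpha` the average return left unexplained by the
    market, and `eps` the diversifiable idiosyncratic residual."""
    beta, alpha = np.polyfit(market_returns, strategy_returns, 1)
    eps = strategy_returns - (alpha + beta * np.asarray(market_returns))
    return alpha, beta, eps
```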

The speaker highlights the importance of alpha and market timing in trading strategies. They explain that an effective strategy requires capturing constant alpha and predicting changes in market factors. If one lacks this ability, passive investing becomes the only viable option. The speaker advises starting the development of a simple trading strategy with ideation and careful observation before proceeding to backtesting. Deep dives into potential ideas using daily prices are recommended to gain initial insights.

A demonstration is provided on how to optimize a trading strategy using coding and data analysis techniques. The example uses Microsoft, Apple, and Google stocks to compute trading signals and estimate the subsequent move from today's open to today's close. Exploratory analysis is conducted through plotting graphs to visualize differences in price movements. Data standardization is discussed to make the value of X comparable across different stocks, considering factors such as volatilities, prices, and percentage volatility. The speaker highlights a statistical phenomenon related to gap ups and gap downs in the Indian market's large-cap Reliance stock and the S&P top 20 indices, leading to the definition of an opening range and closing bar.
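
A hedged sketch of that standardization step, assuming an OHLC DataFrame (the volatility window used in the demo is not stated, so 20 days is an assumption):

```python
import pandas as pd

def standardized_gap(df: pd.DataFrame, vol_window: int = 20) -> pd.Series:
    """Standardize today's opening gap by recent return volatility so the
    signal is comparable across stocks with different price levels and
    volatilities; the result behaves like a z-score, mostly in [-3, 3]."""
    ret = df["close"].pct_change()
    vol = ret.rolling(vol_window).std()
    prev_close = df["close"].shift(1)
    gap = (df["open"] - prev_close) / prev_close
    return gap / vol
```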

The speaker then moves on to discuss the benefits of the EPAT (Executive Programme in Algorithmic Trading) program for traders and individuals interested in pursuing a career in trading. They emphasize that the EPAT program is a practical program focused on trading, making it suitable for those who aspire to become traders or work on brokerage trading desks. The program provides a comprehensive understanding of trading strategies, risk management techniques, and the practical aspects of algorithmic trading.

In contrast to programs that focus more on theoretical aspects, the EPAT program offers practical knowledge that can be directly applied in real-world trading scenarios. The speaker encourages individuals who aim to become risk quants to explore other programs that delve deeper into theoretical concepts.

When asked about statistics topics essential for trading, the speaker recommends referring to any college-level statistics book to gain insights into applying statistics in trading. They also suggest following quantitative finance blogs and Twitter accounts to access valuable learning materials and stay updated with the latest trends and developments in the field.

Regarding strategy development, the speaker emphasizes the importance of thinking in terms of statistics and quantification to translate trading ideas into code. The EPAT program equips traders with the necessary skills to define good and profitable trading strategies. They stress the need to put effort into strategy development and acknowledge that making consistent profits in algo trading requires dedication and perseverance.

The speaker addresses specific questions from the audience, providing guidance on topics such as defining local lows and highs in code, obtaining and using code for option trading, and finding sample code. They mention that code samples can be found on GitHub and clarify that the EPAT program includes components of trading strategies, but they are unsure if position sizing is covered.

Moving on, the speaker discusses the application of algo trading in simple option strategies like iron condors. They highlight the significance of execution speed in high-frequency trading, where execution timing plays a crucial role. However, for medium to long-term strategies, alpha sources are more important than speed. Algo trading can be particularly useful in monitoring multiple options on different stocks to ensure that no potential trades are missed.

The speaker shares their perspective on the use of alternative data in trading strategies. They express mixed emotions about its effectiveness, pointing out that while some alternative data can be valuable, not all data sources yield useful insights. The decision to incorporate outliers in trading strategies depends on the specific trading and risk profiles of the strategy being employed.

Adaptive strategies are also discussed, which have the ability to optimize themselves based on changing market conditions. The speaker highlights various techniques for creating adaptive strategies and emphasizes their potential to enhance trading performance and adaptability.

In conclusion, the speaker reiterates that while building trading strategies based on various types of charts is possible, it is essential to have specific rules in place to ensure success. They caution that there are no "free lunches" in the market and emphasize the importance of a disciplined and systematic approach to trading decisions.

The video ends with an invitation to viewers to ask any additional questions they may have about the EPAT program or its potential benefits for their careers and businesses. Interested individuals are encouraged to connect with program counselors to inquire about admission details and fee flexibility through the provided forum or other communication channels.

  • 00:00:00 The speaker introduces the background of QuantInsti and provides a brief about the speaker's experience in trading and banking. The speaker explains the differences between various trading methodologies like systematic trading, quantitative trading, algorithmic trading, and high-frequency trading. The main focus of this video is to provide a sneak peek into developing and optimizing a trading strategy in a typical quant way and a comparison between discretionary and quantitative trading.

  • 00:05:00 The speaker discusses the importance of outperformance and the hit ratio in trading. To achieve outperformance in at least 50% of stocks with a 95% probability, traders must be correct in their predictions a certain number of times. The number increases with the number of assets being tracked and traded. Therefore, systematic trading, which allows for tracking more stocks, has an edge over discretionary trading. However, discretionary trading can offer deeper proprietary insights due to tracking fewer stocks. The speaker also introduces the fundamental law of investment management, which states that the performance of an investment manager over the benchmark is directly proportional to their hit ratio and the square root of the number of bets taken.

  • 00:10:00 The speaker explains that different kinds of traders capture risk and returns in different ways such as technical traders, fundamental traders, and quants. He mentions that almost all of these different types of trading can be expressed as a rule, making systematic trading possible. The definition of a trading strategy is given as a mathematical set of rules that tells you when to buy, sell, or hold, no matter the phase the market is in. The goal of a trading strategy is to generate a signal function based on incoming data and convert it into a target position for the underlying asset. The speaker notes that trading is complex given the randomness and stochastic nature of the market, but creating rule-based strategies can help manage risk.

  • 00:15:00 The lecturer starts by explaining the different functions involved in designing and implementing a trading strategy. He emphasizes that the realized return in the actual market is outside your control and cannot be changed, which is why it's essential to optimize the performance function Pi, given some constraints, by changing the parameters of the signal and position functions. The lecture then moves on to discuss the different stages of strategy development, beginning with ideation, which leads to a verifiable hypothesis. The hypothesis is then tested by converting the rules into programmable rules, followed by backtesting to see whether the rules generate profit or fail. The outcome of this testing phase is the estimation of the risk and P&L profile, after which the strategy is deployed while taking care of risks that are not captured in the testing phase. Finally, the lecturer highlights the importance of looking for the next strategy after deployment.

  • 00:20:00 The speaker explains the equations for the return on investment in a trading strategy, which include factors like alpha, beta, and epsilon. He goes on to discuss the risk and P&L of a strategy and explains how idiosyncratic risk can be diversified away and is not part of the expected return. He also explains the concepts of beta and alpha, and suggests passive broad-based indexing if the only factor is the market, while buying factors such as value or momentum can help diversify further. Finally, the speaker notes that creating alpha is not easy and requires careful selection or timing.

  • 00:25:00 The speaker discusses the importance of alpha and market timing in trading strategies. The speaker explains that an effective trading strategy requires the ability to capture constant alpha and predict changes in the market factors. If one does not have the ability to do so, the only option is passive investing. The speaker then goes on to discuss how to develop a simple trading strategy by starting with ideation and making observations without jumping straight into backtesting. Instead, the speaker recommends doing a deep dive into each potential idea and using daily prices to get a quick idea before moving forward with more detailed testing.

  • 00:30:00 The speaker demonstrates how to optimize a trading strategy using a set of stocks: Microsoft, Apple, and Google. They use coding and data analysis techniques to compute trading signals and relate them to the subsequent outcome, today's close minus today's open. The speaker explains that this is exploratory analysis: mainly plotting graphs of the difference between today's open and yesterday's low or high against the outcome they want to predict. They then subset the data from 2008 to 2013 and draw a scatter plot to see how the relationship looks, as sketched below.
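A rough reconstruction of that exploratory step, using synthetic OHLC data as a stand-in for real prices; the column names and the 2008 to 2013 subset mirror the description, everything else is illustrative:

```python
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt

# Synthetic daily OHLC bars as a placeholder for real data (replace with actual prices).
rng = np.random.default_rng(0)
dates = pd.bdate_range("2008-01-01", "2013-12-31")
close = 100 * np.exp(np.cumsum(rng.normal(0, 0.01, len(dates))))
open_ = close * (1 + rng.normal(0, 0.005, len(dates)))
high = np.maximum(open_, close) * (1 + np.abs(rng.normal(0, 0.005, len(dates))))
low = np.minimum(open_, close) * (1 - np.abs(rng.normal(0, 0.005, len(dates))))
df = pd.DataFrame({"open": open_, "high": high, "low": low, "close": close}, index=dates)

# X: gap of today's open over yesterday's high; Y: today's close minus today's open.
gap_up = df["open"] - df["high"].shift(1)
outcome = df["close"] - df["open"]
feats = pd.DataFrame({"gap_up": gap_up, "outcome": outcome}).dropna().loc["2008":"2013"]
feats.plot.scatter(x="gap_up", y="outcome")
plt.show()
```

The same plot against yesterday's low gives the gap-down picture; the point of the exercise is just to eyeball whether the gap carries any information about the intraday move before committing to a full backtest.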

  • 00:35:00 The speaker discusses the standardization step that makes the value of X comparable across stocks with different volatilities and price levels: the data is converted to z-scores, which mostly fall between -3 and +3. The speaker observed a statistical phenomenon around gap-ups and gap-downs in Reliance, a large-cap Indian stock, and in S&P top-20 indices, which led to the definition of the opening range and the closing bar. The signal function computes the gap between the opening range and the closing bar, normalized by the stock's calculated volatility, and the sign of the signal sets the direction: when the signal is positive, the entry level is the high of the opening-range candle; when it is negative, the entry level is the low of the opening-range candle, which determines the long or short position.
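One way to express that signal and entry logic in code; the 63-day volatility lookback and the exact gap definition are assumptions, not the lecture's parameters:

```python
import pandas as pd

def gap_signal(open_range_px: float, prev_close: float,
               daily_returns: pd.Series, vol_lookback: int = 63) -> float:
    """Gap between the opening range and the previous closing bar,
    normalized by realized volatility so it is comparable across stocks."""
    vol = daily_returns.rolling(vol_lookback).std().iloc[-1]
    gap = (open_range_px - prev_close) / prev_close
    return gap / vol if vol > 0 else 0.0

def entry_level(sig: float, or_high: float, or_low: float):
    """Positive signal: go long on a breach of the opening-range high;
    negative signal: go short on a breach of the opening-range low."""
    if sig > 0:
        return ("long", or_high)
    if sig < 0:
        return ("short", or_low)
    return None
```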

  • 00:40:00 The speaker discusses the position function and how to optimize a trading strategy using a platform called BlueShift. The position function enters trades for stocks with signals and allocates equal capital to each. The entry rule is restricted to the first hour after the opening range, and a trade is entered only when its entry level is breached. The exit rule is to square off entered positions 30 minutes before the market closes. BlueShift requires familiarity with Python to deploy trading strategies, including technical-indicator and quantitative strategies.
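A sketch of the equal-capital position function described here, assuming plain dict inputs rather than BlueShift's actual API:

```python
def target_quantities(capital: float, signals: dict, prices: dict) -> dict:
    """Split capital equally across signaled stocks; the sign of the signal
    sets the direction. signals and prices are keyed by symbol (illustrative)."""
    active = {s: sig for s, sig in signals.items() if sig != 0}
    if not active:
        return {}
    per_stock = capital / len(active)
    return {s: int((per_stock / prices[s]) * (1 if sig > 0 else -1))
            for s, sig in active.items()}
```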

  • 00:45:00 The speaker explains how to create a universe for trading on the BlueShift platform using the "symbol" function, then discusses how to compute the signal by querying historical stock prices, extracting the current and previous bar prices along with volatility, and normalizing the gap-up or gap-down by that volatility. The trading conditions for a bullish, bearish, or neutral phase are also explained. Additionally, the speaker outlines two small functions, one for turning off trading after a certain period and one for unwinding, or squaring off, positions before the market closes. Finally, the speaker describes the loop that creates signals and places trades based on the bullish, bearish, or neutral mood and opening-range breakouts.
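Putting those pieces together, here is a zipline-style skeleton of that loop. BlueShift's real import paths and helper names may differ, so treat every API call below as an assumption, and compute_signal as the hypothetical gap-normalization helper sketched earlier:

```python
def initialize(context):
    # Universe built with the platform's symbol() helper.
    context.universe = [symbol("MSFT"), symbol("AAPL"), symbol("GOOG")]
    context.entered, context.no_new_entries = set(), False
    schedule_function(reset_day, date_rules.every_day(), time_rules.market_open())
    schedule_function(stop_entries, date_rules.every_day(),
                      time_rules.market_open(hours=1))      # entries only in the first hour
    schedule_function(unwind, date_rules.every_day(),
                      time_rules.market_close(minutes=30))  # square off before the close

def handle_data(context, data):
    if context.no_new_entries:
        return
    for stock in context.universe:
        bars = data.history(stock, ["open", "high", "low", "close"], 20, "1d")
        sig = compute_signal(bars)  # hypothetical helper: gap normalized by volatility
        if sig != 0 and stock not in context.entered:
            order_target_percent(stock, (1 if sig > 0 else -1) / len(context.universe))
            context.entered.add(stock)

def reset_day(context, data):
    context.no_new_entries = False

def stop_entries(context, data):
    context.no_new_entries = True

def unwind(context, data):
    for stock in list(context.entered):
        order_target_percent(stock, 0)  # square off every open position
    context.entered.clear()
    context.no_new_entries = True       # stay flat until the next open
```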

  • 00:50:00 The speaker discusses the process of optimizing a trading strategy. Before starting, it is important to enumerate the strategy's parameters, such as the signal threshold, the volatility lookback period, and the position function. The next step is to define an objective function that determines what the optimization targets, for example maximizing total returns or the Sharpe ratio. The speaker suggests running a search that varies the parameters within a range to find the combination that maximizes the objective function. Many platforms offer this as a feature, using genetic algorithms or simulated annealing to speed up the search.
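A naive version of that parameter search, with backtest standing in as a hypothetical callable that runs one backtest for a parameter set and returns a dict of metrics:

```python
import itertools

def grid_search(backtest, thresholds, vol_lookbacks, metric="sharpe"):
    """Exhaustive search over a small parameter grid, keeping the best objective value."""
    best_params, best_value = None, float("-inf")
    for th, lb in itertools.product(thresholds, vol_lookbacks):
        result = backtest({"signal_threshold": th, "vol_lookback": lb})
        if result[metric] > best_value:
            best_params = {"signal_threshold": th, "vol_lookback": lb}
            best_value = result[metric]
    return best_params, best_value

# Example: grid_search(backtest, thresholds=[0.5, 1.0, 1.5], vol_lookbacks=[21, 63, 126])
```

Genetic algorithms or simulated annealing replace the exhaustive product with a guided search, which matters once the grid gets large.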

  • 00:55:00 The speaker explains the scientific way of developing a strategy: ideation, hypothesis testing, and evaluation; if something doesn't work, throw it away, and if it works, deploy it. The speaker cautions against using tools like parameter search to maximize the objective function, since that essentially optimizes the strategy for the past, not the future. Instead, they suggest a research-based approach to figure out what went wrong and what can be improved, although this is hard to generalize. Finally, the speaker proposes adding a take-profit target, motivated by optionality and option theory, to improve the strategy.

  • 01:00:00 The speaker discusses two improvements made to the strategy. The first is a take-profit rule, which locks in profit when the favorable move from the entry level to the current price exceeds the profit target. The second puts an upper bound on the signal-generation thresholds, which increased the win rate. The speaker also emphasizes the importance of the stability of the time-series metric in generating consistent profit, and notes that non-linearity in the signal-outcome relationship can hurt the strategy. Overall, the speaker demonstrates how incorporating theoretical insights can significantly improve trading performance.
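A minimal sketch of the take-profit check; the names and the long/short convention are assumptions, not the lecture's code:

```python
def should_take_profit(entry_price: float, current_price: float,
                       target: float, is_long: bool) -> bool:
    """Lock in gains once the favorable move from entry exceeds the profit target."""
    move = current_price - entry_price if is_long else entry_price - current_price
    return move >= target
```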

  • 01:05:00 The lecturer discusses adding an upper threshold above the entry threshold, together with a lower threshold, to avoid stepping into the mean-reversion region of the signal-versus-outcome relationship. This sidesteps the non-linearity and leads to improved drawdown and performance. The lecturer also discusses using a stop-loss as risk control rather than as a signal mechanism, and introduces the idea of a sigmoid position function. The sigmoid avoids placing large trades in the zone where it is unclear whether the signal is positive or negative, leading to a significant improvement in performance. Overall, almost every metric looks great, with the stability of the time series at 80 percent.
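A possible shape for that sigmoid position function; the lower/upper thresholds and the scaling are illustrative assumptions:

```python
import numpy as np

def sigmoid_position(sig: float, lower: float = 0.5,
                     upper: float = 3.0, scale: float = 1.0) -> float:
    """Scale position size smoothly with signal strength instead of all-or-nothing.

    Signals inside (-lower, lower) are skipped as too uncertain; signals beyond
    upper are skipped as likely mean-reversion territory."""
    if abs(sig) < lower or abs(sig) > upper:
        return 0.0
    size = 2.0 / (1.0 + np.exp(-scale * abs(sig))) - 1.0  # maps |sig| into (0, 1)
    return np.sign(sig) * size
```

The design choice is that position size grows with conviction, so the strategy commits little capital exactly where the signal-versus-outcome relationship is noisiest.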

  • 01:10:00 The speaker discusses further optimization techniques, such as adding filters for volatility and market direction and implementing a switching mechanism to adapt to changing market conditions. The speaker also covers risk-control measures for going live with a strategy, including determining trading profiles, setting risk-control parameters, and limiting the maximum number of trades and the maximum size per trade to avoid rogue trading. The section ends with a brief overview of how to go live with a strategy using a live trading portal.
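An illustrative pre-trade risk check of the kind described, with hypothetical limit values:

```python
def pre_trade_checks(order_qty: int, price: float, state: dict,
                     max_trades: int = 50, max_notional: float = 100_000.0):
    """Reject orders that would breach per-day trade-count or per-trade size limits."""
    if state["trades_today"] >= max_trades:
        return False, "max daily trade count reached"
    if abs(order_qty) * price > max_notional:
        return False, "order exceeds max size per trade"
    return True, "ok"
```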

  • 01:15:00 The speaker explains the importance of a strategy-wise rather than a trade-wise approach to risk limits, and emphasizes the need for a kill switch to stop a strategy. He shows how Blueshift supports this through its selectable settings, including automatically killing a strategy when it reaches a certain loss percentage. The speaker also stresses the importance of ensuring there are no differences between the backtesting code and the live trading code. He summarizes the journey from a 0.74 to a respectable 1.2 Sharpe ratio, with a focus on the ideation, testing, optimization, and deployment phases. The speaker also answers questions about the position function and Bitcoin derivatives, and directs viewers to resources on GitHub and YouTube for further learning.
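Blueshift exposes the kill switch as a platform setting; as a hand-rolled illustration of what a strategy-level (not trade-level) kill switch does, reusing the assumed names from the earlier skeleton:

```python
def kill_switch(context, current_equity: float):
    """Stop the whole strategy, not just one trade, once losses breach a set percentage."""
    loss_pct = (context.start_equity - current_equity) / context.start_equity
    if loss_pct >= context.max_loss_pct:       # e.g. 0.05 for a 5% strategy-level stop
        for stock in list(context.entered):
            order_target_percent(stock, 0)     # square off everything
        context.entered.clear()
        context.trading_enabled = False        # block any further entries
```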

  • 01:20:00 The speaker explains that EPAT is a practical program focused on trading, making it suitable for those interested in becoming a trader or working on a brokerage trading desk. On the other hand, those looking to become a risk quant should consider other programs that are more theoretical. When asked about statistics topics to know for trading, the speaker suggests picking up any college-level statistics book and developing insights into applying statistics for trading. They also recommend following quant blogs or Twitter accounts to find good materials. In terms of strategy, the speaker notes that even a profitable strategy may still lag behind inflation, but they believe the example strategy discussed in the lecture likely beat inflation. Additionally, the speaker notes that it's possible to create a strategy for a bear market.

  • 01:25:00 The video discusses various aspects of optimizing a trading strategy. The focus is on market neutrality, where the strategy has zero beta and is unaffected by whether the market is in a bear or bull phase. The video goes on to explain how to fix a strategy that underperforms because of a wrong assumption, for example by using an adaptive strategy or adding a filter. Additionally, the video explains how the program helps traders define good, profitable strategies by teaching them to think in terms of statistics and quantification and to translate ideas into code. Finally, the video explains that it is possible to become a successful individual algo trader in mid- to low-frequency trading, but high-frequency trading requires a big institution.
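To check whether a strategy is actually market neutral, one common approach is to regress its returns on market returns and inspect the slope; a minimal sketch:

```python
import numpy as np

def strategy_beta(strategy_returns, market_returns) -> float:
    """Estimate beta as the OLS slope of strategy returns on market returns."""
    beta, _alpha = np.polyfit(market_returns, strategy_returns, 1)
    return beta

# A market-neutral strategy targets beta close to 0, so bull vs bear regime matters little.
```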

  • 01:30:00 The speaker emphasizes the importance of strategy development and the effort required to make a profit in algo trading. Programming knowledge is beneficial but not crucial; individuals with zero programming background have caught up through training. The critical skill sets are taking ownership of one's success and the ability to learn. The speaker addresses specific questions on defining local lows and highs in code (one common approach is sketched below), obtaining and using the code for option trading, and finding sample code. The code can be found on GitHub, and the speaker notes that the program includes parts of trading strategies but is unsure whether position sizing is included.
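For the question on defining local lows and highs in code, one standard approach (not necessarily the one in the program's sample code) uses scipy's relative-extrema helper:

```python
import numpy as np
from scipy.signal import argrelextrema

def local_extrema(prices: np.ndarray, order: int = 5):
    """Indices of local lows/highs: bars lower/higher than `order` neighbors on each side."""
    lows = argrelextrema(prices, np.less, order=order)[0]
    highs = argrelextrema(prices, np.greater, order=order)[0]
    return lows, highs
```

Larger `order` values pick out fewer, more significant swing points; the right setting depends on the timeframe being traded.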

  • 01:35:00 The speaker discusses the use of algo trading in simple option strategies such as iron condors, stating that execution is more important than the actual strategy in high-frequency trading due to the importance of speed. For medium to long-term strategies, alpha sources are more important than speed, but algo trading can still be useful for monitoring multiple options on different stocks to avoid missing trades. The speaker also discusses the use of alternative data, expressing mixed emotions about its effectiveness and stating that some alternative data is useful while others are not. The use of outliers in trading strategies depends on the trading profile and risk profile of the strategy. Lastly, the speaker mentions adaptive strategies, which can optimize themselves depending on market conditions and various techniques for creating these strategies.

  • 01:40:00 The speaker discusses the possibility of building trading strategies based on various types of charts, but cautions that there are no free lunches in the market and that specific rules must be in place to ensure success. The speaker also mentions that support is available for those looking to start their own trading desk, but that the success of an algorithm in illiquid markets depends on the strategy being employed. The speaker advises that no asset class is inherently better than another and cautions against basing trading decisions on gut feelings.

  • 01:45:00 The video discusses how the EPAT program can help traders optimize their trading strategy through learning various strategy paradigms. The program offers ten or more different paradigms to increase trading success and safety. Viewers are encouraged to ask any questions they may have about the program and its potential benefits for their careers and businesses. The video also mentions that interested individuals can connect with program counselors regarding admission and fee flexibility through the provided forum.
EPAT Sneak Peek Lecture - How to Optimize a Trading Strategy? - Feb 27, 2020
  • 2020.02.28
  • www.youtube.com
This EPAT Demo Lecture was conducted by Prodipta Ghosh (Vice President, QuantInsti), who explained how one could optimize a trading strategy. We have receive...