Quantitative trading - page 21

 

High frequency trading strategies

Thank you for inviting me today to present my paper on high-frequency trading strategies. My name is Amy Kwan, and I'm from the University of Sydney. This paper is co-authored with Michael Goldstein from Babson College and Richard Phillip, also from the University of Sydney.

The purpose of this paper is to contribute to the ongoing debate among regulators, market participants, and academics regarding the impact of high-frequency trading (HFT) on financial markets. We have heard different perspectives on this matter, including the presentation by Sean and the discussion last night.

While there are varying opinions on HFT, critics such as Michael Lewis, the author of the book "Flash Boys," argue that the US stock market has become a class system based on speed, where the privileged few pay for nanoseconds of advantage while others remain unaware of the value of these tiny time intervals. Brad Katsuyama, whose story features prominently in that book, similarly claims that HFTs can pick up trading signals and take advantage of regular investors.

Early academic evidence generally supported HFT and algorithmic trading, finding that they enhance liquidity and improve traditional market quality measures: spreads narrow, depth increases, and short-term volatility falls. However, more recent studies have documented negative aspects of HFT. For example, HFTs can anticipate the order flow of other investors and extract rents from them.

Moreover, recent studies indicate that HFTs initially trade against the wind but then trade with the wind as a large trade progresses. To illustrate this, consider a large pension fund that wants to buy Apple stock. HFTs, upon detecting this trade, may compete with the institution to trade in the same direction, because they anticipate the future price increase caused by the buying pressure.

Although there is some understanding of the effects of HFT, the literature remains unclear about how HFTs actually trade and influence financial markets. Most of the existing evidence is based on trade executions, and little is known about order submission behavior in Australia.

To address this gap, our study directly examines HFT trading strategies by analyzing the full limit order book data. We have access to detailed information on order submissions, amendments, cancellations, and trades for the top 100 stocks on the ASX. By classifying traders into HFT firms, institutional traders, and retail brokers, we aim to understand their behavior and the impact on market dynamics.

Our main findings reveal that HFTs excel at monitoring the order book and trading on order imbalances. When there is greater demand to buy or sell a stock, HFTs are more successful at capitalizing on this information than the other trader categories. Additionally, we observe that HFTs supply liquidity on the thick side of the order book, where it is not needed, while non-HFTs suffer from limited access to the order book because of HFTs' strategic trading behavior.
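To make the kind of signal concrete, here is a minimal sketch, not taken from the paper, of a top-of-book order imbalance computed from hypothetical depth figures; the study's own imbalance measures are built from the full limit order book.

```python
# Minimal sketch (not from the paper): a simple top-of-book imbalance signal.
# bid_depth / ask_depth are hypothetical resting volumes at the best quotes.
def order_imbalance(bid_depth: float, ask_depth: float) -> float:
    """Returns a value in [-1, 1]; positive values indicate buy-side pressure."""
    total = bid_depth + ask_depth
    return 0.0 if total == 0 else (bid_depth - ask_depth) / total

# Example: 12,000 shares bid vs. 4,000 offered gives an imbalance of 0.5,
# the kind of buy-side pressure the paper finds HFTs act on quickly.
print(order_imbalance(12_000, 4_000))  # 0.5
```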

We also examine the introduction of a faster data feed, ITCH, and find that HFTs become even more effective in their strategic trading after its implementation. Non-HFT orders, however, are crowded out of the limit order book, reducing those traders' chances of successful execution.

In conclusion, our study contributes to the understanding of HFT trading strategies by analyzing the full limit order book data. We find that HFTs outperform other trader categories in monitoring the order book and trading on imbalances. The introduction of a faster data feed further enhances their trading advantage. These findings shed light on how HFTs influence market dynamics and provide valuable insights for regulators, market participants, and academics.

Thank you again for the opportunity to present our research.

High frequency trading strategies
  • 2017.02.05
  • www.youtube.com
Speaker: Amy Kwan, 7th Emerging Markets Finance Conference 2016, 13th - 17th December 2016
 

Ciamac Moallemi: High-Frequency Trading and Market Microstructure

Part of the purpose of my presentation is to familiarize people with the research conducted by faculty members. Before delving into the main topic, I would like to provide some background on my own work as an applied mathematician. Approximately half of my time is dedicated to exploring stochastic control problems, which involve making decisions over time in the presence of uncertainty. These abstract mathematical problems pose significant challenges but are fundamental, as many engineering and business problems share similar characteristics. The other half of my research focuses on the more applied aspect of stochastic control problems in the field of financial engineering.

Drawing from my previous experience as a hedge fund manager, I have a particular interest in optimal trading, market microstructure, and high-frequency trading in financial markets. Today, I will be discussing these topics to provide insights into the complexities of modern electronic markets. To appreciate the issues at hand, it is crucial to understand the main features of US equity markets, which have significantly evolved over the past five to ten years.

First and foremost, electronic trading dominates the market, rendering the traditional image of traders on the floor of the New York Stock Exchange largely irrelevant. Trading now primarily takes place on computers, with electronic trading being the primary mechanism for exchange. Another notable change is the decentralization or fragmentation of trading. In the past, a particular stock would predominantly trade on either Nasdaq or the New York Stock Exchange. However, there are now multiple exchanges, each accounting for a substantial percentage of equity trading.

These exchanges are organized as electronic limit order books, where market participants can submit buy and sell orders with specified prices. When prices intersect, trades are executed. This is in contrast to the historical dealer market or specialist market structure of the New York Stock Exchange. Additionally, around 30% of trades occur on alternative venues such as electronic crossing networks, dark pools, and internalization, further contributing to the decentralized nature of trading.
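To make the matching mechanics concrete, here is a minimal sketch of a price-crossing limit order book; the order fields and matching rule are simplified assumptions, and real matching engines additionally handle cancels, modifications, and richer priority rules.

```python
# Minimal sketch of continuous limit-order-book matching (simplified; real
# engines also handle cancels, modifications, and richer priority rules).
import heapq

class LimitOrderBook:
    def __init__(self):
        self._bids = []  # max-heap via negated price: (-price, seq, qty)
        self._asks = []  # min-heap: (price, seq, qty)
        self._seq = 0    # arrival counter, giving time priority at equal prices

    def submit(self, side: str, price: float, qty: int):
        self._seq += 1
        if side == "buy":
            heapq.heappush(self._bids, (-price, self._seq, qty))
        else:
            heapq.heappush(self._asks, (price, self._seq, qty))
        self._match()

    def _match(self):
        # Trades execute while the best bid price crosses the best ask price.
        while self._bids and self._asks and -self._bids[0][0] >= self._asks[0][0]:
            neg_bid, bseq, bqty = heapq.heappop(self._bids)
            ask, aseq, aqty = heapq.heappop(self._asks)
            traded = min(bqty, aqty)
            print(f"trade {traded} @ {ask}")  # simplification: resting ask sets the price
            if bqty > traded:
                heapq.heappush(self._bids, (neg_bid, bseq, bqty - traded))
            if aqty > traded:
                heapq.heappush(self._asks, (ask, aseq, aqty - traded))

book = LimitOrderBook()
book.submit("sell", 10.01, 100)
book.submit("buy", 10.02, 150)  # crosses the resting offer: trade 100 @ 10.01
```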

One of the most striking features of modern markets is the increasing automation of participants. Previously, a human trader would handle large orders, but now algorithms and high-frequency trading have taken over. Algorithmic trading allows investors to slice and dice large orders over time and across exchanges, while high-frequency traders, often categorized as market makers, provide liquidity. These recent trends have made the market more complex and have led to unpredictable interactions between algorithmic traders and high-frequency traders.

These developments have raised important questions at both the policy level and for individual participants. Policymakers and regulators need to evaluate the benefits and drawbacks of the current complex market structure. They must also address issues such as the occurrence of events like the famous flash crash of May 6, 2010, where market prices dropped significantly in a matter of minutes due to a pathological interaction between an algorithmic trader and high-frequency traders.

At the individual participant level, decision-making problems need to be addressed. Given the complexity and unpredictability of the market, participants must determine the most effective approach for their trading strategies. It is within this context that I have conducted research on two specific problems related to high-frequency trading and market microstructure: understanding the importance of latency and examining the role of dark pools in markets.

Latency refers to the delay between making a trading decision and its execution. The ability to trade quickly with low latency has become increasingly important. To assess the value and cost associated with latency, it is necessary to evaluate its significance in trading decisions. Over the years, latency in US equity markets has dramatically decreased, with trading now occurring in microseconds. This technological advancement has been driven by demand from high-frequency traders and others seeking faster execution.

Understanding the importance of latency raises further questions. Is low latency beneficial in making decisions with the latest information? Does being faster than competitors provide an advantage in capturing profits? Additionally, the rules and organization of exchanges often prioritize early entry, creating advantages for traders with lower latency connections. This raises concerns about fairness and equal access to market opportunities.

To address these questions, my research involves developing mathematical models that capture the dynamics of high-frequency trading and the impact of latency on trading strategies. By simulating different scenarios and analyzing the results, I aim to provide insights into the optimal balance between speed and accuracy in trading decisions. This research can help market participants, such as hedge funds or institutional investors, in designing their trading algorithms and infrastructure to maximize their performance in a highly competitive environment.

Another area of my research focuses on the role of dark pools in modern markets. Dark pools are private trading venues that allow participants to execute large trades anonymously, away from the public market. These alternative venues have gained popularity due to their potential to minimize market impact and improve execution quality for institutional investors with significant trading volumes.

However, the rise of dark pools has raised concerns about market transparency and fairness. Critics argue that the lack of transparency in these venues can create information asymmetry and negatively impact price discovery. Additionally, there have been instances where high-frequency traders exploit the lack of pre-trade transparency in dark pools for their own advantage.

In my research, I investigate the impact of dark pools on market liquidity, price formation, and the behavior of market participants. By developing mathematical models and conducting empirical analysis, I aim to understand the benefits and drawbacks associated with dark pool trading. This research can contribute to the ongoing debate about the regulation and oversight of dark pools and help market participants make informed decisions about their trading strategies.

In conclusion, my presentation today provides an overview of my research in the field of financial engineering, specifically focusing on high-frequency trading, market microstructure, latency, and dark pools. By delving into these topics, I aim to shed light on the complexities of modern electronic markets and the challenges they present for market participants and regulators. Through mathematical modeling, simulations, and empirical analysis, my research aims to provide valuable insights and contribute to the ongoing discussions and developments in the field of financial markets.

Furthermore, another aspect of my research revolves around the impact of regulatory policies on financial markets. Regulatory bodies play a crucial role in ensuring market integrity, stability, and investor protection. However, the design and implementation of regulations can have unintended consequences and affect market dynamics.

One area of focus in my research is the examination of market reactions to regulatory announcements. By analyzing historical data and conducting event studies, I investigate how market participants, such as traders and investors, adjust their strategies and positions in response to regulatory changes. This research helps in understanding the immediate and long-term effects of regulations on market liquidity, volatility, and overall efficiency.

Additionally, I explore the effectiveness of different regulatory measures in achieving their intended goals. For example, I study the impact of circuit breakers, which are mechanisms designed to temporarily halt trading during extreme market movements, on market stability. By analyzing historical market data and conducting simulations, I assess whether circuit breakers effectively prevent or exacerbate market crashes.

Another area of interest is the examination of regulations aimed at reducing systemic risk in financial markets. This involves analyzing the impact of measures such as capital requirements, stress tests, and restrictions on proprietary trading by banks. By studying the effects of these regulations on the stability of the financial system, I aim to provide insights into their effectiveness and potential unintended consequences.

Furthermore, I also explore the intersection of technology and regulation, particularly in the context of emerging technologies such as blockchain and cryptocurrencies. These technologies present unique challenges and opportunities for regulators, as they can disrupt traditional financial systems and introduce new risks. My research in this area focuses on understanding the regulatory implications of these technologies and exploring potential frameworks that can foster innovation while ensuring market integrity and investor protection.

My research in financial engineering encompasses a wide range of topics, including the impact of regulatory policies, market reactions to regulatory changes, and the intersection of technology and regulation. Through rigorous analysis, mathematical modeling, and empirical studies, I strive to provide valuable insights into the functioning of financial markets and contribute to the development of effective and well-informed regulatory frameworks.

Ciamac Moallemi: High-Frequency Trading and Market Microstructure
  • 2012.11.19
  • www.youtube.com
On November 13, 2012, Ciamac Moallemi, Associate Professor of Decision, Risk, and Operations at Columbia Business School, presented High-Frequency Trading an...
 

Kent Daniel: Price Momentum

I am pleased to be here and I would like to thank everyone for coming. It's great to see everyone so enthusiastic about this topic. Today, I will be discussing a specific quantitative strategy commonly used by hedge funds. This strategy is often implemented with significant leverage, and it complements the subjects that Professor Sunnah Reyes and Professor Wong have been addressing. My aim is to introduce the concept of quantitative investing and provide insights into this particular strategy.

Furthermore, I am conducting research on understanding the factors behind price momentum and the occurrence of this phenomenon in markets. I argue that the market is not entirely efficient, primarily due to imperfect information processing by investors. Thus, I will delve into the characterization of momentum and offer some thoughts on its underlying causes.

Recently, I came across an article in Bloomberg magazine featuring Cliff Asness, a notable figure in the industry. His firm has faced challenges in the past, mainly due to momentum. I find this particularly relevant to our discussion today. In fact, Asness and his company have not given up on momentum. They have even launched a mutual fund called the AQR Momentum Fund, in addition to their hedge fund endeavors.

AQR, both with their mutual funds and hedge funds, employs mathematical rules to construct diversified portfolios with a specific bias. In the case of momentum, they focus on investing in winners and selling losers. Today, I will explore this strategy in greater detail. However, before diving into the specifics, I want to share some insights from a research paper by Asness, Moskowitz, and Pedersen. The paper investigates the presence of momentum across different asset classes.

According to their findings, momentum has historically performed well in various regions, including the United States, the United Kingdom, and continental Europe. However, it did not yield the same positive results in Japan. Additionally, the research explores momentum in equity country selection, bond country selection, foreign currency, and commodities, with varying degrees of success in each area.

So, what drives momentum? Based on my preliminary work and theories, the most compelling explanation revolves around information processing by investors. When investors receive new information, they tend to exhibit a status quo bias, assuming that things will remain relatively unchanged. While they anticipate some price movement in response to the information, they do not fully comprehend its impact. Consequently, the price moves slightly, but it takes time, often around a year, for the information to be fully reflected in prices.

In the context of financial markets, if you observe a price movement linked to information, it is likely that the momentum will continue. This persistence in price movement aligns with the concept of momentum in physics, where an object moving at a certain speed in a particular direction tends to keep moving unless an external force acts upon it.

Now, let's explore how to construct a momentum strategy. Suppose you want to implement a simple momentum strategy similar to AQR's approach. Here's a step-by-step guide:
  • At the beginning of a given month, calculate the returns of all stocks listed on NYSE, Amex, and NASDAQ over the past 12 months, up until one month ago.
  • Rank the stocks on these formation-period returns and identify the top 10% as winners and the bottom 10% as losers.
  • Form a value-weighted portfolio of the winners and a value-weighted portfolio of the losers; buy $1 of the winner portfolio and short $1 of the loser portfolio to create a zero-cost long-short position.
  • Rebalance at the beginning of each month by updating the formation-period returns and rankings.
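As an illustration only, here is a compact pandas sketch of the formation step just described; the DataFrame name and decile cutoffs are assumptions, and the actual construction in the talk uses CRSP data with value-weighted portfolios.

```python
# Sketch of 12-2 momentum formation (hypothetical inputs; the talk's actual
# construction uses CRSP data and value-weights the winner and loser legs).
import pandas as pd

def momentum_portfolios(monthly_returns: pd.DataFrame, decile: float = 0.10):
    """monthly_returns: rows = month-ends, columns = tickers, simple returns."""
    # Formation signal: cumulative return over months t-12 through t-2,
    # skipping the most recent month (a standard momentum convention).
    formation = (1 + monthly_returns).rolling(11).apply(lambda x: x.prod() - 1)
    formation = formation.shift(1)  # window now ends one month before formation

    winners, losers = {}, {}
    for date, row in formation.dropna(how="all").iterrows():
        ranked = row.dropna().sort_values()
        n = max(int(len(ranked) * decile), 1)
        losers[date] = list(ranked.index[:n])    # bottom decile: the short leg
        winners[date] = list(ranked.index[-n:])  # top decile: the long leg
    return winners, losers
```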

This strategy produces relatively low month-to-month turnover, because consecutive formation windows share most of their return history and the rankings therefore change slowly. As the formation period extends toward 12 months, however, the cumulative returns of the winner and loser groups diverge significantly.

Now, let's assess the performance of this strategy from 1949 to 2007. Over that period, the strategy earned an average return, in excess of T-bills, of 16.5% per year, which is quite substantial. This indicates that the momentum strategy of buying winners and selling losers has been highly profitable over the long term.

Now, you might wonder if this excess return is consistent across different time periods. To examine this, let's break down the data into different decades and see how momentum performs. Here are the excess returns for each decade:

  • 1950s: 13.5%
  • 1960s: 14.7%
  • 1970s: 14.3%
  • 1980s: 13.7%
  • 1990s: 9.4%
  • 2000s: 13.1%

As you can see, momentum has delivered positive excess returns in every decade, although the magnitude varies. It's worth noting that the 1990s had a relatively lower excess return compared to other decades, but it was still positive.

So, why does momentum persist as a profitable strategy? One explanation is that investors tend to underreact to new information, causing prices to adjust slowly. As a result, stocks that have experienced positive returns continue to outperform because their prices have not fully reflected all available information. This delayed adjustment provides an opportunity for investors to capture profits by riding the momentum.

It's important to mention that while momentum has shown consistent profitability, it doesn't mean it's risk-free. Like any investment strategy, it comes with its own set of risks and challenges. Market conditions can change, and past performance is not a guarantee of future results. Therefore, thorough analysis, risk management, and ongoing monitoring are crucial when implementing a momentum-based investing approach.

In conclusion, the momentum strategy, which involves buying winners and selling losers, has historically generated significant excess returns in financial markets. Despite variations in returns across different decades, momentum has remained a profitable strategy overall. However, investors should exercise caution and consider various factors before incorporating this strategy into their investment approach.

Kent Daniel: Price Momentum
  • 2011.07.15
  • www.youtube.com
On November 9, 2010, Kent Daniel, professor of Finance and Economics at Columbia Business School, presented Price Momentum. The presentation was part of the ...
 

Algorithmic Trading and Machine Learning

Okay, thank you, Costas, for having me. I would also like to express my gratitude to Eric for his insightful talk, which provides valuable context for the discussion I will be presenting. Today, I will be focusing on the experiences of operating on the other side of these exchanges and dealing with high-frequency traders (HFTs) and other counterparties. I want to clarify that my talk will not explicitly cover game theory, as Costas assured me that it is acceptable. However, I will delve into practical aspects, drawing from my experience working with a quantitative trading group on Wall Street for the past 12 years.

First and foremost, I would like to extend special thanks to my trading partner, Yuriy Nevmyvaka, who is a co-author on all the work I will be discussing. Our research and insights have emerged from proprietary commercial contexts within our trading group. The aspects I will be highlighting are the non-proprietary elements that we have found scientifically interesting over time.

Wall Street is undoubtedly an intriguing place, both technologically and socially. It has witnessed significant changes due to automation and the abundance of data. These transformations have given rise to numerous trading challenges, which necessitate a learning-based approach, particularly machine learning. With vast amounts of data available at a temporal and spatial scale beyond human comprehension, algorithms have become indispensable in trading. These algorithms need to be adaptive and trained on historical data, including recent data, to make sensible trading decisions.

In my presentation, I will outline three specific problem areas that arise in algorithmic trading within modern electronic markets. These vignettes or case studies shed light on the algorithmic challenges and offer hints on addressing them using new techniques.

The first two problems revolve around optimized execution. When executing a trade, whether buying or selling a specific volume of shares, there is a trade-off between immediacy and price. One can choose to execute the trade quickly, impacting prices but possibly capitalizing on fleeting informational advantages. On the other hand, a more leisurely approach can be taken, allowing the market to converge to the desired price over a longer duration. I will delve into these trade-offs and present specific instances that demonstrate the challenges faced in electronic markets.

The third problem pertains to algorithmic versions of classical portfolio optimization, such as mean-variance optimization. This involves holding a diversified portfolio that maximizes returns while considering risk or volatility. Although algorithmic in nature, this problem connects with traditional portfolio optimization approaches.

It is worth noting that the continuous double limit order auction, as described by Eric earlier, serves as the backdrop for these challenges. The image of the flash crash and the book by Michael Lewis on high-frequency trading underline the interesting and dynamic times we currently experience on Wall Street. While I do not intend to pass moral judgment on any trading activity, including high-frequency trading, I aim to elucidate the algorithmic challenges faced in modern electronic markets from the perspective of a quantitative trading group operating within a traditional statistical equities trading framework.

Our trading group specializes in trading equities, both long and short, encompassing a wide range of liquid instruments in domestic and international markets. To hedge our positions, we exclusively employ futures, avoiding complex derivatives. Despite trading in relatively simple markets and instruments, the rising automation and availability of data on Wall Street have introduced a multitude of trading problems that necessitate a learning approach, often employing machine learning.

By the way, one example of this is the observation that when one analyst upgrades their view on a stock, other analysts tend to upgrade their views on the same stock in quick succession. One therefore needs to determine whether a given upgrade is genuinely fresh news or simply a reaction to the same underlying news already in the market; in the latter case, it may not be advisable to trade on that information.

Now, regarding your question about why we don't pause partway through and check back rather than simply buying the remaining volume, there are two answers to this. Firstly, if we are a brokerage like Bank of America with an algorithmic trading desk, we execute trades based on the client's directive. They give us instructions on how many shares to buy within a specific timeframe, and we don't ask for confirmation during the process. Secondly, we have optimized our strategies to determine the right volume to buy based on the available information. This volume is usually the maximum we can trade without significantly impacting the stock's price. While it is possible to implement the approach you suggested, we prefer to minimize the number of parameters involved to simplify decision-making in the complex world of trading.

Regarding the testing process, we conduct live testing on the six months following the study period, which allows us to evaluate the model's performance in real market conditions. The model itself is fit on the historical data preceding that testing phase and is held fixed while it is evaluated.

When it comes to explaining our policies to people, we primarily rely on an empirical approach rather than eyeballing. In this particular problem, it is clear what constitutes sensible behavior. The challenge arises when dealing with strategies that work well without a clear understanding of why they work. In such cases, we sometimes approach the problem from an anthropological perspective, trying to understand the reasons behind the consistent profitability of certain trades.

We acknowledge that the complexity of what we learn poses challenges in terms of interpretation. While we can identify consistent predictive power in certain state variables, understanding the underlying reasons at a granular level is extremely difficult. The microstructural nature of financial markets, especially in high-frequency trading, involves volumes and data speeds that surpass normal human comprehension. Therefore, we focus on careful training and testing methodologies to ensure consistent performance.

In our experiments, we have explored various features of the order book and their impact on performance. For example, incorporating the bid-ask spread into the state space has proven valuable for optimizing trade execution. However, not all features provide the same benefit, and some variables may even have negative effects on performance due to overfitting. By selecting the most informative features, we have achieved an additional 13 percent improvement on top of the 35 percent improvement achieved through control theoretic approaches.
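As a rough illustration of what adding such features to the execution state might look like, here is a small sketch with hypothetical fields; the actual state representation, learner, and reported improvements belong to the group's proprietary work.

```python
# Sketch of an execution state augmented with order-book features
# (hypothetical fields; the actual state space and learner are richer).
from dataclasses import dataclass

@dataclass
class ExecutionState:
    time_remaining: int       # decision steps left in the trading horizon
    inventory_remaining: int  # shares still to be bought or sold
    spread_ticks: int         # current bid-ask spread, in ticks
    imbalance: float          # signed depth imbalance in [-1, 1]

def make_state(t_left, inv_left, best_bid, best_ask, bid_depth, ask_depth, tick=0.01):
    spread_ticks = round((best_ask - best_bid) / tick)
    total = bid_depth + ask_depth
    imbalance = 0.0 if total == 0 else (bid_depth - ask_depth) / total
    return ExecutionState(t_left, inv_left, spread_ticks, imbalance)

# A learned policy maps such a state to an action, e.g. how aggressively to
# price the next child order; the talk credits features like the spread with
# the additional improvement over the control-theoretic baseline.
```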

We have been evaluating a solution experimentally, although I don't have the time to delve into the specifics right now. However, I can provide a simplified explanation of liquidity using a cartoon model. Different dark pools, which are alternative trading venues, exhibit varying liquidity properties at different times and for different stocks.

When a new exchange, whether it's a limit order book or a dark pool, emerges, it often tries to establish itself in the market by offering preferential treatment, rebates, or fees for a particular class of stocks. They promote themselves as the preferred dark pool for trading specific types of stocks. As a result, traders interested in those stocks are drawn to that specific dark pool, creating liquidity. In contrast, other dark pools may have different liquidity profiles and may not attract as much trading activity.

To visualize this concept, imagine each dark pool having a unique liquidity profile for a given stock, represented by a stationary probability distribution. The x-axis represents the number of shares, while the y-axis represents the probability of finding that many shares available for execution at each discrete time step. When we submit our trade order to a dark pool, a number s is drawn from this distribution, indicating the volume of counterparties willing to trade at that time step. The executed volume is the minimum of the drawn volume s and the requested volume v, so the order may be only partially executed.

Now, you may wonder how the liquidity curve can be non-decreasing when partial execution occurs. The liquidity curve merely represents the likelihood of finding available volume within a certain range. It shows that smaller volumes are more likely to be available for execution, while larger volumes are less likely. Partial execution simply means that the executed volume is less than the requested volume, but it doesn't affect the overall shape of the liquidity curve.
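A minimal simulation of this cartoon model, with hypothetical liquidity parameters, may help: each venue draws an available volume s for the time step and fills min(s, v), which is exactly the censored feedback that makes allocating an order across pools a learning problem.

```python
# Minimal simulation of the dark-pool cartoon model (hypothetical parameters).
import random

def submit_to_pool(v: int, mean_liquidity: float) -> int:
    """Draw the available volume s for this time step and fill min(s, v)."""
    s = int(random.expovariate(1.0 / mean_liquidity))  # stationary liquidity draw
    return min(s, v)  # censored: we observe s exactly only when s < v

# Allocate a 1,000-share order equally across three pools whose liquidity
# profiles differ (and are unknown to the trader in practice).
pools = {"pool_A": 400.0, "pool_B": 150.0, "pool_C": 50.0}
order = 1_000
per_pool = order // len(pools)
filled = {name: submit_to_pool(per_pool, mu) for name, mu in pools.items()}
print(filled, "total filled:", sum(filled.values()))
```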

The proliferation of dark pools is an interesting phenomenon. It raises questions about market equilibrium and the competition among these venues. It remains uncertain whether the market will eventually consolidate, leading to the dominance of a few dark pools. Similar dynamics have been observed in continuous double auctions since the deregulation of financial markets allowed multiple exchanges to operate simultaneously. The regulatory landscape and the ability of startups to propose new mechanisms contribute to the complexity of market structure.

Considering the connection between this research and Eric's paper, we can explore the interplay between different market structures, algorithms, and their impact on market stability and fragmentation. By simulating scenarios involving multiple players using similar algorithms, we can investigate the computational outcomes and study how market structure and algorithm diversity influence prices and other regulatory concerns. This combination of research efforts could yield valuable insights into the complex relationship between market structure, algorithmic trading, and market stability.

Furthermore, we can delve into more sophisticated questions, such as the interaction between different algorithms and market structures, and how they shape market dynamics. By examining various market scenarios, we can analyze the suitability of different market structures and algorithms for achieving stability and addressing fragmentation issues.

The evolution of financial markets has led to the automation of certain aspects, often replacing useful human elements. However, new electronic mechanisms have been introduced to replicate and enhance functionality. Understanding these dynamics and adapting our strategies accordingly allows us to navigate the complexities of modern financial markets.

My talk will shed light on the algorithmic challenges inherent in trading in modern electronic financial markets. The three case studies I will present highlight the complexities and trade-offs faced in optimized execution and algorithmic portfolio optimization. While time constraints may prevent me from fully covering all the topics, I hope to provide valuable insights into these areas.

While simulations and computational analyses offer avenues for understanding the potential outcomes of algorithmic trading, it is essential to strike a balance between abstract modeling and real-world relevance. The challenge lies in identifying which details are crucial and which can be safely overlooked without sacrificing practical relevance, especially in the complex and ever-evolving landscape of financial markets.

Algorithmic Trading and Machine Learning
  • 2015.11.20
  • www.youtube.com
Michael Kearns, University of Pennsylvania. Algorithmic Game Theory and Practice. https://simons.berkeley.edu/talks/michael-kearns-2015-11-19
 

The Design of Financial Exchanges: Some Open Questions at the Intersection of Econ and CS

Thank you very much, Kostas. This talk is going to be a bit unconventional for me, but I hope it aligns with the spirit of this conference and the topic of open directions. It is connected to the design of financial exchanges, particularly the prevailing design known as the continuous limit order book. I will begin by discussing a paper I recently worked on with Peter Cramton and John Shim, which highlights an economic flaw in the current financial exchange design. This flaw, we argue, contributes to the negative aspects of high-frequency trading.

The first part of the talk will cover this paper, which may be familiar to some of you but likely not to most. It presents an economic case for an alternative approach called discrete-time trading or frequent batch auctions. Our paper suggests that the continuous limit order book, while widely used worldwide, suffers from a structural flaw that leads to various issues associated with high-frequency trading. I will present a condensed and accessible version of this part, as it has been presented multiple times before.

The second and third parts of the talk will delve into open questions and research directions concerning the design of financial exchanges. These areas of inquiry lie at the intersection of economics and computer science. In the later sections, I will discuss a two-page portion in the back of the Quarterly Journal of Economics paper that presents a qualitative argument, devoid of theorems or data, for the computational benefits of discrete-time trading compared to the current market design. This discussion will raise numerous questions and aim to stimulate further exploration.

Although the latter parts of the talk are less formal than what I am accustomed to, I believe they are crucial in raising open questions and setting an agenda for future research. This aligns with the purpose of this conference, which encourages the exploration of economics and computer science intersections and suggests fruitful directions for future inquiry.

Now, let's delve into the economic case for discrete-time trading and its advantages over the continuous limit order book, which I will explain in more detail. The continuous limit order book is a market design that processes trillions of dollars in economic activity each day. It operates based on limit orders, which specify the price, quantity, and direction (buy or sell) of a security. Market participants can submit, cancel, or modify limit orders throughout the day, and these messages are sent to the exchange.

Trade occurs when a new request matches with existing orders in the limit order book. For instance, a buy request with a price equal to or higher than an outstanding sell offer would result in a trade. This is the basic functioning of the continuous limit order book.

However, our research suggests that this market design has inherent flaws. One major issue is what we call "sniping." When there is a change in public information or signals, trading firms engaged in liquidity provision adjust their quotes accordingly. They cancel their previous bids or asks and replace them with new ones reflecting the updated information. Now, suppose I am one of these trading firms adjusting my quotes. At the same time, others, like Thomas, also send messages to the exchange to trade at the old quotes before they are replaced.

Since the market processes these messages in continuous time and in a serial order, it becomes random which message reaches the exchange first. If multiple trading firms react to the new information simultaneously, there is a chance that a request from Thomas or any other participant is processed before mine, allowing them to trade at the old price. This phenomenon of sniping is problematic and creates several implications.

First, it enables mechanical arbitrage opportunities based on symmetric public information, which is not supposed to happen in an efficient market. Second, the profits from such arbitrage opportunities come at the expense of liquidity provision. As snipers successfully execute trades at old prices, liquidity provisioning trading firms become hesitant to adjust their quotes quickly. This hesitancy stems from the fear of being sniped and losing out on potential profits. Consequently, the market becomes less efficient as liquidity providers become less willing to update their quotes in response to new information.

Another issue with the continuous limit order book is the potential for order anticipation. In this scenario, traders observe the arrival of new limit orders and preemptively adjust their quotes in anticipation of future trades. This behavior can lead to a cascading effect, where traders constantly adjust their quotes in response to one another, creating unnecessary volatility and instability in the market.

To address these flaws, our paper proposes an alternative market design known as discrete-time trading or frequent batch auctions. In this design, rather than processing orders in continuous time, the market operates in discrete time intervals or batches. During each batch, market participants can submit their limit orders, and at the end of the batch, the market clears, and trades are executed at a single uniform price.
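For concreteness, here is a minimal sketch of how a single batch might clear at one uniform price; the orders are hypothetical, and the choice of price within the crossing interval and the tie-breaking rules are simplifications of what the paper specifies.

```python
# Minimal sketch of clearing one frequent batch auction at a uniform price
# (hypothetical orders; tie-breaking and the exact price rule are simplified).
def clear_batch(buys, sells):
    """buys/sells: lists of (limit_price, quantity). Returns (volume, price)."""
    buys = sorted(buys, key=lambda o: -o[0])   # demand: highest bid first
    sells = sorted(sells, key=lambda o: o[0])  # supply: lowest offer first
    volume, price = 0, None
    bi = si = 0
    b_qty = buys[0][1] if buys else 0
    s_qty = sells[0][1] if sells else 0
    while bi < len(buys) and si < len(sells) and buys[bi][0] >= sells[si][0]:
        traded = min(b_qty, s_qty)
        volume += traded
        # One uniform price for the whole batch, here the midpoint of the
        # marginal crossing bid and offer (a simplification).
        price = (buys[bi][0] + sells[si][0]) / 2
        b_qty -= traded
        s_qty -= traded
        if b_qty == 0:
            bi += 1
            b_qty = buys[bi][1] if bi < len(buys) else 0
        if s_qty == 0:
            si += 1
            s_qty = sells[si][1] if si < len(sells) else 0
    return volume, price

# All orders collected during the interval clear together at one price.
print(clear_batch([(10.02, 150), (10.00, 200)], [(10.01, 100), (10.03, 300)]))
```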

By introducing discrete-time trading, we eliminate the issues of sniping and order anticipation. Since all orders submitted within a batch are processed simultaneously, there is no randomness in the order execution. Traders can be confident that their orders will be executed at the same price as other participants within the same batch, ensuring fairness and reducing the incentive for sniping.

Moreover, frequent batch auctions promote stability and reduce unnecessary volatility in the market. Traders no longer need to constantly adjust their quotes in response to every incoming order. They can instead focus on analyzing information and making informed trading decisions, knowing that their orders will be executed at the end of the batch at a fair price.

Investing in financial markets often involves some waiting time before a transaction takes place, and different people may have different opinions on whether this waiting time is a significant cost. For example, if you are slightly faster than I am in reacting to news, say by a millionth of a second, you can act on news events within that window while I cannot. The importance of this speed advantage in a batch market is governed by the ratio of the speed differential (delta) to the batch interval (tau).

In the current continuous market, if you are slightly faster than me, you can always snipe me. In a discrete market, you can only snipe me in the small fraction of intervals, roughly delta over tau, in which the news arrives within delta of the end of the batch; and if you and several other traders are all slightly faster than me, you have to compete with one another in an auction on price to trade with me, rather than winning simply by being fastest. This raises the question of whether different markets would adopt this synchronized clock approach uniformly, or whether there are practical challenges involved.
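As a back-of-the-envelope statement of this argument, under the simplifying assumption that a single piece of public news arrives at a uniformly random time within a batch interval:

```latex
% Simplifying assumptions: speed advantage \delta, batch interval \tau,
% and news arriving at a uniformly random time within the interval.
\[
  \Pr(\text{the faster trader can snipe the stale quote})
  \;\approx\; \frac{\delta}{\tau},
\]
% which shrinks as \tau grows; in the continuous market the faster trader
% wins the race to the stale quote essentially every time.
```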

It is important to note that in the current continuous market, the law of one price is constantly violated because price changes do not occur simultaneously across different exchanges. This violation is not easily detectable with human observation or the available research data. However, if multiple exchanges were to adopt frequent batch auctions simultaneously, it would be possible to detect violations of the law of one price more easily. This doesn't necessarily mean that one approach is better or worse, but rather that the data would provide clearer insights.

If a single exchange were to transition to a discrete market while others remained continuous, that exchange would eliminate latency arbitrage and remove a tax on liquidity provision. In an economic sense, this could give an advantage to the discrete market exchange over time. However, there are challenges to launching a new marketplace, regulatory ambiguities, and vested interests from existing exchanges that benefit from the current market design.

Regarding IEX's proposal to introduce latency into every order while maintaining a continuous-time exchange, it works by delaying both incoming and outgoing orders by a specific time interval. IEX monitors changes in the market within a fraction of a second and adjusts prices accordingly. However, a potential weakness in their design is that it relies on accessing price information from external sources. This raises questions about whether IEX's approach contributes to price discovery or simply relies on information from elsewhere.

On the other hand, introducing random delays to all orders may not effectively address sniping and can lead to infinite message traffic. While there have been several ideas proposed to address the problem, many of them have proven to be ineffective when analyzed. In contrast, our paper proposes making time discrete and batch processing as a solution to the flaw in market design, which creates rents from public information and encourages a speed race.

One aspect we discuss in the paper is the computational advantages of discrete-time trading. Modern financial markets have faced various computational issues, such as flash crashes and exchange glitches. Discrete time offers computational simplicity compared to continuous time and provides specific benefits for exchanges, algorithmic traders, and regulators.

For exchanges, continuous-time processing can lead to backlog issues, where algorithms are uncertain about the state of orders and the market during times of high activity. In contrast, discrete-time batch auctions can be processed more efficiently and provide a cushion of time relative to worst-case processing time. This reduces the uncertainty and backlog problems faced by exchanges.

Discrete time also simplifies message processing for exchanges, eliminating the need to prioritize the dissemination of different types of messages. This reduces the possibility of exploiting information asymmetry. Additionally, discrete time simplifies the programming environment for exchanges, potentially reducing the occurrence of glitches and improving overall system stability.

Another computational benefit of discrete-time trading is that it simplifies the analysis and modeling of algorithmic strategies. In continuous-time markets, algorithmic traders face the challenge of optimizing their response to incoming data in real-time. They need to make decisions quickly while taking into account the changing market conditions. This trade-off between speed and intelligence is a complex problem to solve.

However, in discrete-time trading, the batch processing of data allows algorithmic traders to have a fixed interval to analyze and make decisions. For example, if the batch interval is set to 100 milliseconds, traders have the luxury of dedicating the first 100 milliseconds to thorough analysis without the pressure of immediate execution. This can lead to more sophisticated and accurate decision-making processes.

Research questions arise from this computational advantage. How can algorithmic traders strike the right balance between speed and intelligence in their decision-making? Are there negative externalities associated with favoring speed over intelligence in the market? Does the discrete-time framework improve the accuracy of price formation compared to continuous-time trading?

For regulators, discrete-time trading offers the benefit of a cleaner paper trail. In continuous-time markets, the synchronization of clocks and the adjustment of timestamps can introduce complexities when reconstructing the sequence of events. It becomes challenging to determine the chronological order of actions across different markets. In contrast, discrete-time trading simplifies this process, making it easier to establish a clear and accurate record of market activity.

The potential benefits of a clean paper trail in discrete-time trading are an open question. Intuitively, a well-documented and easily traceable market activity can improve transparency and accountability. It may enhance market surveillance and help regulators identify and address manipulative or illegal trading practices more effectively.

Our research highlights the economic flaws in the prevailing continuous limit order book design and presents an alternative approach called discrete-time trading or frequent batch auctions. This alternative design addresses issues such as sniping and order anticipation, promoting fairness, stability, and efficiency in financial exchanges. By exploring these open questions and research directions, we aim to stimulate further investigation into the design of financial exchanges, bridging the fields of economics and computer science to enhance market functionality and performance.

Discrete-time trading offers several computational advantages over continuous-time trading. It simplifies message processing for exchanges, reduces computational bottlenecks, and allows for more sophisticated algorithmic strategies. It also provides a cleaner paper trail for regulators, enhancing market surveillance and transparency. However, further research is needed to explore the implications and potential drawbacks of discrete-time trading in practice.

The Design of Financial Exchanges: Some Open Questions at the Intersection of Econ and CS
  • 2015.11.20
  • www.youtube.com
Eric Budish, University of Chicago. Algorithmic Game Theory and Practice. https://simons.berkeley.edu/talks/eric-budish-2015-11-19
 

ChatGPT and Machine Learning in Trading

The presenter delves into the topic of utilizing natural language processing (NLP) models like ChatGPT in the trading industry, emphasizing their ability to analyze and understand text sources such as news articles, social media posts, and financial statements. Specifically, ChatGPT, a powerful language model, is well-suited for analyzing vast amounts of financial data and generating natural-sounding responses, enabling traders to engage in conversations about trading opportunities.

The finance community holds high expectations for ChatGPT, anticipating its contribution to the development and optimization of trading strategies. The presenter further elucidates the distinctions between artificial intelligence (AI), machine learning (ML), and deep learning, highlighting that machine learning is a subset of AI that employs techniques to teach machines to simulate human behavior and make intelligent decisions.

Moving on, the presenter discusses the typical workflow of ML in trading. They explain that ML enables machines to learn from data and make predictions, following a series of steps. Initially, data is collected and pre-processed to ensure its quality and relevance. Next, features are engineered to convert raw data into meaningful attributes that machines can comprehend. The data is then divided into training and test sets, and a model is constructed using ML algorithms. Finally, the model is tested on new data, and if it performs satisfactorily, it can be used for making predictions, facilitating the trading process.

To illustrate the application of ML, the presenter provides an example of predicting the high and low prices of an asset, such as gold, for the next trading day. This prediction can greatly assist traders in making informed decisions and improving their trading strategies.

Furthermore, the presenter explores how ChatGPT can serve as a valuable tool in solving trading problems, such as creating a linear regression model for predicting gold prices. They compare ChatGPT's approach to a more comprehensive quantitative approach, which involves data collection, cleaning, model creation, pipeline development, live trading, and continuous improvement. An example of a machine learning regression code notebook is shared, outlining the four key steps involved in solving the problem: data preparation, preprocessing, price prediction, and strategy and performance analysis. While ChatGPT can aid in idea generation, the presenter emphasizes the need for a nuanced understanding of each concept and careful consideration to avoid errors.
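As an illustration only, here is a compact sketch of those four steps; the file name, feature choices, and 80/20 split are assumptions rather than the webinar's own notebook code.

```python
# Compact sketch of the four steps described above (hypothetical file name,
# columns, and features; the webinar's own notebooks are more detailed).
import pandas as pd
from sklearn.linear_model import LinearRegression

# 1. Data preparation: daily OHLC prices for gold from a local file.
data = pd.read_csv("gold_daily.csv", parse_dates=["date"], index_col="date")

# 2. Preprocessing / feature engineering: use today's values to predict
#    tomorrow's high and low.
features = pd.DataFrame({
    "day_range": data["high"] - data["low"],
    "close_minus_open": data["close"] - data["open"],
    "close": data["close"],
})
targets = data[["high", "low"]].shift(-1)  # next day's high and low
dataset = pd.concat([features, targets.add_prefix("next_")], axis=1).dropna()

# 3. Price prediction: chronological split, then one linear regression
#    predicting both targets.
split = int(len(dataset) * 0.8)
train, test = dataset.iloc[:split], dataset.iloc[split:]
X_cols = ["day_range", "close_minus_open", "close"]
model = LinearRegression().fit(train[X_cols], train[["next_high", "next_low"]])
predictions = model.predict(test[X_cols])

# 4. Strategy and performance analysis would compare predictions with realized
#    highs and lows, derive trading rules, and compute returns and risk metrics.
```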

The limitations and risks associated with using ChatGPT in ML-based algorithmic trading are also addressed. The presenter highlights potential challenges, including the lack of domain expertise, limited training data, and interpretability issues. They caution against relying solely on ChatGPT for trading decisions and stress the importance of conducting accuracy checks across different financial periods.

Additionally, the presenter discusses the results of a poll conducted on ChatGPT's ability to generate code accurately. The majority of the audience (74%) correctly identifies that ChatGPT can provide reasonable accuracy but is not suitable for complex programming tasks requiring domain expertise. To illustrate the process, the presenter demonstrates how to split data into training and test sets using Python code generated by ChatGPT. They emphasize the correct sequencing of data, particularly in the context of time series data in trading.
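The sequencing point can be illustrated with a minimal helper (the DataFrame and split fraction are assumptions): for time-series data the split must be chronological, because shuffling would leak future information into the training set.

```python
# Chronological train/test split for time-series data (hypothetical DataFrame);
# shuffling the rows here would leak future information into the training set.
def time_series_split(df, train_fraction=0.8):
    df = df.sort_index()                     # keep rows in time order
    cutoff = int(len(df) * train_fraction)
    return df.iloc[:cutoff], df.iloc[cutoff:]

# train, test = time_series_split(dataset)   # e.g. the gold dataset sketched above
```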

Evaluation of ML-based trading algorithms through backtesting and strategy analytics is highlighted as a crucial step in assessing their performance. The presenter emphasizes the need for in-depth analysis using various metrics such as the Sharpe ratio, annualized returns, and volatility of returns to gain insights into trades and overall performance. A comparison between the returns of a trading algorithm and a buy-and-hold strategy is shown as an initial step in evaluating the algorithm's effectiveness.
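A small sketch of the metrics named above, computed from a daily strategy return series; the series itself and the 252-day annualization convention are assumptions.

```python
# Sketch of the strategy analytics named above (hypothetical daily returns;
# annualization assumes roughly 252 trading days per year).
import numpy as np
import pandas as pd

def strategy_analytics(daily_returns: pd.Series, daily_risk_free: float = 0.0):
    excess = daily_returns - daily_risk_free
    ann_return = (1 + daily_returns).prod() ** (252 / len(daily_returns)) - 1
    ann_vol = daily_returns.std() * np.sqrt(252)
    sharpe = np.sqrt(252) * excess.mean() / excess.std()
    cumulative = (1 + daily_returns).cumprod() - 1
    return {
        "annualized_return": ann_return,
        "annualized_volatility": ann_vol,
        "sharpe_ratio": sharpe,
        "cumulative_return": cumulative.iloc[-1],
    }

# Comparing strategy_analytics(strategy_returns) with
# strategy_analytics(buy_and_hold_returns) is the first check described above.
```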

Moreover, the presenter shares an example of a profitable trading strategy and emphasizes the significance of visualizing and analyzing data in the trading process. Strategy analytics, including annual returns and cumulative returns, are instrumental in evaluating the success of a strategy.

Shifting gears, the presenter addresses the limitations of using GPT for financial analysis in trading. The audience previously participated in a poll, with the majority expressing the opinion that reasonable accuracy requires fact-checking and that GPT may not be suitable for financial analysis. To illustrate this limitation, the presenter requests GPT to compare the yearly financial statements of Apple and Microsoft for 2020. However, GPT provides an inaccurate response, highlighting its limitations as a generator model that lacks domain expertise. The presenter underscores the importance of acquiring finance-related knowledge, reading books, and fact-checking before applying ML algorithms, such as GPT, to trading.

Recognizing the significance of domain-related knowledge in finance, the presenter suggests taking courses to gain expertise. This expertise enables traders to make better use of machine learning tools like ChatGPT. In support of this, the presenter offers free access to four notebooks from a trading with machine learning education course, allowing viewers to gain a deeper understanding of the code and its application.

During the Q&A session, one common question arises regarding ChatGPT's ability to keep up with daily changes in financial markets. The presenter clarifies that as a language model, ChatGPT's effectiveness is limited by the data it was trained on and does not update daily. Staying updated with the latest market data is essential for effectively utilizing ChatGPT or any machine learning model in finance.

The speakers address various other audience questions, providing helpful information. They inform the audience that the recorded session will be shared through email and on their YouTube channel for future reference. They also discuss the availability of a notebook for the next 24 hours and explain the concept of a pipeline in machine learning.

A specific question is raised regarding converting vectorized Python code into a format deployable in a live trading library. The speakers explain that while ChatGPT can assist in code conversion, defining event triggers is still necessary. Additionally, they mention that ChatGPT 3.5 does not provide information for the year 2022.

To conclude, the speakers discuss a trading strategy that utilizes next day high and low predictions, which was optimized using machine learning techniques. They emphasize the applications of deep learning in trading, such as time series forecasting, portfolio optimization, and risk management. Deep learning, coupled with reinforcement learning, can enhance the performance of trading strategies by enabling agents to learn from mistakes through rewards and punishments.

The presenter emphasizes that domain expertise and intuition remain pivotal for reliable usage of machine learning in trading. While tools like ChatGPT can aid in analyzing historical data and assessing the probability of success in future trades, they should not be relied upon solely. The importance of acquiring domain-related knowledge, fact-checking, and continuously staying updated on the latest market data is stressed to ensure accurate and informed decision-making in the trading industry.

  • 00:00:00 ML algorithms can understand the trends and patterns in the market and then use that information to predict future market movements. To aid in this process, machine learning algorithms are often used, and this is where ChatGPT comes into play. ChatGPT is a natural language processing tool that can help traders analyze large amounts of financial data and provide insights into market trends. However, using ChatGPT does come with its own set of challenges and risks, which will be discussed later in the presentation. Overall, ML and ChatGPT have revolutionized the trading industry by allowing for more accurate predictions and better-informed decision-making.

  • 00:05:00 The speaker discusses the use of natural language processing (NLP) models like ChatGPT in the trading industry. These models are able to analyze and understand text sources such as news articles, social media posts, and financial statements. ChatGPT, a large language model, is particularly well-suited for analyzing such data and can generate natural-sounding responses to text prompts, making it possible to engage in conversations about trading opportunities. The finance community has high expectations for ChatGPT, as it is expected to help develop and optimize trading strategies. The speaker also explains the differences between artificial intelligence, machine learning, and deep learning, with machine learning being a collection of techniques used in AI to teach machines to simulate human behavior and make intelligent decisions.

  • 00:10:00 The speaker discusses how machine learning (ML) can be used for trading and describes the typical workflow of ML in trading. They explain that ML is a subset of artificial intelligence (AI) that enables machines to learn from data and make predictions. To apply ML, first, the data is collected and pre-processed, then features are engineered to convert raw data into attributes that a machine can understand. The data is then modified, split into training and test sets, and built into a model. Finally, the model is tested on new data, and if satisfactory, predictions can be made. The speaker later provides an example of using ML to predict the high and low of an asset such as gold for the next trading day, which can help ease the trading process.

  • 00:15:00 The speaker discusses how ChatGPT can be used as an assistant in solving problems such as creating a linear regression model to predict gold prices for the next day. They compare ChatGPT's approach to a more professional quantitative approach, which includes collecting and cleaning data, creating models and pipelines, checking data APIs, doing live trading, and deploying to production while continuously improving. They also show an example of a machine learning regression code notebook and explain the four parts of solving the problem: data preparation, preprocessing, predicting prices, and strategy and performance analysis. The speaker notes that while ChatGPT can be used for idea generation, it is important to understand each concept in detail and be nuanced in the approach to avoid mistakes. They also discuss the use of ChatGPT to generate code and launch a poll.

  • 00:20:00 The speaker discusses the audience's responses to a poll regarding ChatGPT's ability to generate code accurately. The majority of the audience (74%) correctly chose that ChatGPT can provide reasonable accuracy but is not suitable for complex programming tasks that require domain expertise. The speaker then proceeds to demonstrate how to split data into a train and test set using Python code generated by ChatGPT and shows how the data needs to be correctly sequenced for time series data in trading.

  • 00:25:00 The speaker discusses the importance of evaluating the performance of a machine learning-based trading algorithm by doing backtesting and strategy analytics. They explain that this requires detailed analysis of the performance of the strategy and the use of various metrics such as the Sharpe ratio, annualized returns, and volatility of returns to gain insight into trades and performance. The speaker also shows an example of how to compare the return of a trading algorithm with a buy-and-hold strategy as the first step in understanding whether the trading algorithm is performing well.

  • 00:30:00 The speaker discusses the results of applying machine learning algorithms to trading strategies and emphasizes the importance of visualizing and analyzing the data. They present a profitable strategy and highlight the need for strategy analytics, such as annual return and cumulative returns. The speaker then moves on to the challenges and risks associated with using ChatGPT for ML-based algorithmic trading, noting limitations such as lack of domain expertise, limited training data, and interpretability issues. They caution against relying solely on ChatGPT for trading decisions and highlight the importance of accuracy checks for different financial periods.

  • 00:35:00 The speaker discusses the limitations of using the language model GPT for financial analysis in trading. The audience previously participated in a poll, and the majority believed that GPT offers reasonable accuracy but requires fact-checking and may not be suitable for financial analysis. The speaker demonstrates this limitation by asking GPT to compare the yearly financial statements of Apple and Microsoft for 2020, which resulted in an inaccurate response. The speaker emphasizes that GPT is a generative model and lacks domain expertise, which could lead to wrong conclusions or suggestions. Therefore, the speaker recommends reading more books, gaining finance-related knowledge, and fact-checking before applying ML algorithms for trading.

  • 00:40:00 The presenter emphasizes the importance of having domain-related knowledge when applying machine learning in finance. He suggests taking courses to gain this expertise, which can enable better use of machine learning tools such as ChatGPT. The presenter also provides free access to four notebooks from a trading with machine learning education course for viewers to better understand the code. During the Q&A session, a common question was raised about whether ChatGPT can keep up with the daily changes in financial markets. The presenter clarifies that as a language model, it is only as good as the data it is trained on and is not updated daily. For effective use of ChatGPT or any machine learning model in finance, it is essential to stay updated on the latest market data.

  • 00:45:00 The speakers address various questions from the audience. They explain that the recorded session will be shared through email and on their YouTube channel. They also discuss the availability of a notebook for the next 24 hours and the definition of a pipeline in machine learning. The speakers respond to a question about converting vectorized Python code into code that can be deployed to a library for live trading; it is explained that while ChatGPT can help convert the code, it is still necessary to define event triggers. The speakers also mention that ChatGPT 3.5 does not provide information for 2022. Finally, the speakers talk about a trading strategy utilizing next-day highs and lows and how it was optimized using machine learning.

  • 00:50:00 The speaker explains the applications of deep learning in trading, including time series forecasting, optimizing portfolios, and risk management. They describe how reinforcement learning creates agents that learn from mistakes through rewards and punishments, and how a combination of deep learning and reinforcement learning can be used to improve the performance of trading strategies. The speaker emphasizes that the key to reliably using machine learning in trading is domain expertise and intuition, and that tools like ChatGPT can be used to analyze historical data and provide insights on the probability of success in future trades.

  • 00:55:00 The speaker explains that using ChatGPT alone may not be the best approach to determine the riskiness of trades, as this requires a deep understanding of the domain itself. It is important to gain knowledge and understanding of the domain before relying on any tool or code to solve the problem. They also mention the difference between two trading courses and answer a question about converting trading platform-specific code to Python. While ChatGPT may help in converting generic programming languages, it may not be helpful for platform-specific code conversions.
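
To make the workflow from the earlier timestamps concrete, here is a minimal, hypothetical sketch of the steps described: loading daily gold prices, engineering simple lagged features, splitting the data chronologically (no shuffling, as stressed in the poll discussion), fitting linear regressions to predict the next day's high and low, and comparing the resulting toy strategy with buy-and-hold using annualized return and the Sharpe ratio. The file name, features, and entry/exit rule are illustrative assumptions, not the presenter's exact code.

```python
# Minimal sketch of the ML-for-trading workflow discussed above (illustrative only).
import numpy as np
import pandas as pd
from sklearn.linear_model import LinearRegression

# 1. Data collection: gold_daily.csv is an assumed file with Date, Open, High, Low, Close columns
data = pd.read_csv("gold_daily.csv", index_col="Date", parse_dates=True)

# 2. Feature engineering: use only information known before today
features = pd.DataFrame(index=data.index)
features["prev_close"] = data["Close"].shift(1)
features["prev_range"] = (data["High"] - data["Low"]).shift(1)
features["prev_change"] = (data["Close"] - data["Open"]).shift(1)
dataset = pd.concat([features, data[["High", "Low", "Close"]]], axis=1).dropna()

# 3. Chronological train/test split (no shuffling for time series data)
split = int(len(dataset) * 0.8)
train, test = dataset.iloc[:split], dataset.iloc[split:]
X_cols = ["prev_close", "prev_range", "prev_change"]
model_high = LinearRegression().fit(train[X_cols], train["High"])
model_low = LinearRegression().fit(train[X_cols], train["Low"])

# 4. Predict today's high/low and apply a toy rule: buy at the predicted low,
#    exit at the predicted high if it is reached, otherwise exit at the close
pred_high = model_high.predict(test[X_cols])
pred_low = model_low.predict(test[X_cols])
filled = test["Low"].values <= pred_low
exit_px = np.where(test["High"].values >= pred_high, pred_high, test["Close"].values)
strat_ret = np.where(filled, exit_px / pred_low - 1, 0.0)

# 5. Strategy analytics: annualized return and Sharpe ratio versus buy-and-hold
bh_ret = test["Close"].pct_change().fillna(0).values

def annualized(r):
    return (1 + r).prod() ** (252 / len(r)) - 1

def sharpe(r):
    return np.sqrt(252) * r.mean() / r.std()

print("Strategy:     ", annualized(strat_ret), sharpe(strat_ret))
print("Buy-and-hold: ", annualized(bh_ret), sharpe(bh_ret))
```
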
ChatGPT and Machine Learning in Trading
ChatGPT and Machine Learning in Trading
  • 2023.03.22
  • www.youtube.com
This session discusses the basics, uses & needs of ChatGPT and machine learning in trading. Attendees will learn how to integrate ChatGPT and machine learnin...
 

Understanding Financial Market Behaviour: The role of multiple categories of data



Understanding Financial Market Behaviour: The role of multiple categories of data

The host begins the webinar by introducing the topic of understanding financial market behavior and the role of multiple categories of data. The panelists, including Professor Gautam Mitra, Dr. Ernest Chan, and Dr. Mateo Campoloni, are introduced as experts with extensive experience in trading and academic careers. The webinar aims to explore how data from various categories plays a crucial role in understanding and predicting financial market behavior, a topic that has gained increasing importance in recent times. It is mentioned that the session is part of the Certificate in Sentiment Analysis and Alternative Data for Finance offered by OptiRisk Systems and QuantInsti.

The first speaker emphasizes the significance of data in comprehending financial market behavior. While early on, only limited data such as market prices, buy and sell orders, and the depth of the book were available, there is now a wide range of data categories to consider. These include news data, media sentiment data, and alternative data. Despite the efficient market hypothesis, which suggests that markets eventually incorporate all information, there are still short-term inefficiencies in the market. Therefore, data plays a crucial role in discovering new alpha and addressing two major market problems: portfolio planning and risk control. The speaker also highlights the growing importance of artificial intelligence (AI) and machine learning in handling data.

The next speaker introduces the concept of causal investing, which involves examining the causal relationships between different predictors and target variables, rather than solely analyzing statistical correlations. By utilizing alternative data such as options activity, investors can gain insights into the underlying causes of price movements and enhance the accuracy of their trading strategies. An example of the mean-reverting strategy is cited, emphasizing the importance of understanding why it occasionally fails. Through the use of alternative data to uncover the causes of price movements, investors can make more informed decisions about when to apply their strategies.

The significance of data for market operators, particularly alternative data, is discussed by the following speaker. Alternative data refers to any data that is not already an industry standard and forms a constantly expanding ecosystem with new players and data vendors continually emerging. This data can be sourced from various channels such as credit card transactions, satellite images, mobile device data, weather data, and more. The speaker also mentions the use of natural language processing tools to analyze textual documents and generate sentiment indicators, which can be valuable for investors in complementing their investment strategies.
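
As a concrete illustration of the sentiment-indicator idea mentioned above, the sketch below scores news headlines with NLTK's off-the-shelf VADER analyzer and aggregates the scores into a daily per-ticker indicator. The input file and its columns are assumptions for illustration; the panel did not prescribe a specific tool.

```python
# Hypothetical sketch: turning raw news headlines into a daily sentiment indicator
# with NLTK's VADER analyzer. The headlines file and column names are assumptions.
import nltk
import pandas as pd
from nltk.sentiment import SentimentIntensityAnalyzer

nltk.download("vader_lexicon")           # one-off download of the VADER lexicon
sia = SentimentIntensityAnalyzer()

# headlines.csv is assumed to have columns: date, ticker, headline
headlines = pd.read_csv("headlines.csv", parse_dates=["date"])
headlines["score"] = headlines["headline"].apply(
    lambda text: sia.polarity_scores(text)["compound"]   # -1 (negative) .. +1 (positive)
)

# Aggregate headline scores into one sentiment value per ticker per day
sentiment = headlines.groupby(["date", "ticker"])["score"].mean().unstack()
print(sentiment.tail())
```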

The process of utilizing alternative data in investment strategies is described by the next speaker. It involves identifying new sources of information, incorporating and transforming the unstructured data into structured data sets. After developing an investment strategy, validation becomes a crucial step that requires understanding the reliability of the data and the statistical significance of the results. The speaker emphasizes the importance of not solely relying on alternative data and also considering market data when creating models.

The speakers delve into the importance of alternative data in capturing market trends and the challenges involved in backtesting such data. While technical traders previously relied on simple metrics like the 120-day moving average, there is now a push to incorporate a wider range of data categories to understand return causes. However, since alternative data is relatively new, there are concerns about how to backtest it and how consistent it remains over time. Understanding the impact of investment strategies necessitates assessing the stability of the system regarding random fluctuations.
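
One simple way to make the stability point concrete is to bootstrap the daily returns of a strategy and see how widely its Sharpe ratio varies under random fluctuations alone. The sketch below does this for the 120-day moving-average rule mentioned above; the data file and parameter choices are assumptions.

```python
# Illustrative sketch: a 120-day moving-average rule and a bootstrap check of how much
# its Sharpe ratio fluctuates under resampling of daily returns. Data file is assumed.
import numpy as np
import pandas as pd

prices = pd.read_csv("prices.csv", index_col="Date", parse_dates=True)["Close"]
returns = prices.pct_change().dropna()

# Simple rule from the discussion: be long when price is above its 120-day moving average
signal = (prices > prices.rolling(120).mean()).astype(int).shift(1)
strat = (signal * returns).dropna()

def sharpe(r):
    return np.sqrt(252) * r.mean() / r.std()

observed = sharpe(strat)

# Bootstrap: resample the daily strategy returns to gauge the variability of the Sharpe ratio
rng = np.random.default_rng(0)
vals = strat.values
boot = np.array([sharpe(vals[rng.integers(0, len(vals), len(vals))]) for _ in range(1000)])

print(f"Observed Sharpe: {observed:.2f}")
print(f"Bootstrap 5th-95th percentile: {np.percentile(boot, 5):.2f} to {np.percentile(boot, 95):.2f}")
```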

The use of alternative data platforms like Bloomberg Icon and Reuters Quantum by traders to develop robust investment strategies is discussed by the speakers. Although these platforms have their own models to quantify various forms of data such as sentiment and news, the speakers recommend that traders create their own models. The importance of utilizing APIs to receive alternative data inputs is highlighted, and the value of organized websites like Credit Suisse in analyzing company announcements is mentioned. Lastly, the speakers note that narrow, specialized approaches can be highly effective in analyzing market behavior.

The speakers move on to discuss the various tools and websites that can be utilized to understand the behavior of different asset classes in the financial market and how to track the market based on investment style and time horizon. While acknowledging that there is no one-size-fits-all solution, they suggest that qualitative information from websites like Bloomberg can be helpful in this regard. They also emphasize the importance of understanding sentiment and alternative data sources such as microblogs and chat rooms. However, they note that it is not necessarily guaranteed that becoming an expert in these areas would lead to a better career in the financial market.

The speaker then explains the difference between developing advanced trading strategies for large funds versus simple strategies for independent traders. It is mentioned that complex techniques may be more suitable for job seekers at large funds, while independent traders are advised to start with a niche strategy that may not be of interest to institutions. This approach helps them avoid the high costs associated with complex data feeds. The speaker further highlights the increasing interest in new data sources for trading, making it a relevant field to learn and pursue. They also mention that they personally utilize alternative data to some extent in their fund management and assist clients in implementing machine learning and natural language processing-based modules or validating their own strategies using data sets.

During the Q&A session, a question is raised about Twitter selling blue ticks and whether verified accounts would carry more weight in natural language processing (NLP). Initially, the panelists have difficulty understanding the question, but later admit that they are not qualified to answer it. The discussion then shifts to traditional financial data sources suitable for beginners and students, with Bloomberg and Refinitiv mentioned as potential options. The suggestion is made that data providers may offer free data sets with a certain level of interaction.

The speaker subsequently discusses the use of alternative data sources for financial market analysis, specifically mentioning GDELT, a project that collects data from global and local news sources. While acknowledging the effort required to filter out relevant information, it is noted that the collected data can provide a historical perspective on market behavior dating back to the 1800s. When asked whether alternative data should be used as the sole source or for validation alongside traditional data, the speaker states that there is no general rule and it depends on the specific strategy being employed. However, they emphasize that market data remains the primary driver, and alternative data should not be exclusively relied upon.

The speaker concludes the webinar by discussing the use of alternative data in financial markets and how machine learning can be employed to analyze such data. They highlight the necessity of inputting multiple types of data, including price and fundamental data, into machine learning predictive algorithms. However, they also stress that alternative data alone cannot serve as the sole driver and should be combined with market price input. The audience is encouraged to reach out with any further questions they may have.

  • 00:00:00 The host introduces the topic of the webinar, which is understanding financial market behavior and the role of multiple categories of data. The panelists include Professor Gautam Mitra, Dr. Ernest Chan, and Dr. Mateo Campoloni, who have extensive experience in trading and academic careers. The main focus of the webinar is to explore how data from multiple categories plays a crucial role in understanding and predicting financial market behavior, which has become increasingly important in recent times. The session is part of the Certificate in Sentiment Analysis and Alternative Data for Finance offered by OptiRisk Systems and QuantInsti.

  • 00:05:00 The speaker discusses the importance of data in understanding financial market behavior. While early on, the only data available was market prices, buy and sell orders, and the depth of the book, there are now many more categories of data, including news data, media sentiment data, and alternative data. Despite the efficient market hypothesis, which states that markets ultimately digest all information, there are still short-term market inefficiencies. As a result, data is important for finding new alpha and addressing two major problems in the market: portfolio planning and risk control. The speaker also notes that AI and machine learning are becoming increasingly important for handling this data.

  • 00:10:00 The speaker discusses the concept of causal investing, which involves looking at the causal relationships between different predictors and target variables, rather than simply analyzing statistical correlations. With the use of alternative data, such as options activity, investors can understand the underlying causes of price movements and use this information to improve the accuracy of their trading strategies. The speaker cites the example of the mean-reverting strategy and the importance of understanding why it sometimes fails. By using alternative data to uncover the causes of price movements, investors can make more informed decisions about when to run their strategies.

  • 00:15:00 The speaker discusses the importance of data for market operators, specifically alternative data, which refers to any data that is not already an industry standard. Alternative data is a constantly growing ecosystem with new players and vendors of data sets constantly emerging. This data can come from a variety of sources such as credit card transactions, satellite images, mobile device data, weather data, and more. The speaker also mentions the use of natural language processing tools to process textual documents and create sentiment indicators that can be used by investors to complement their investment strategies.

  • 00:20:00 The speaker describes the process of using alternative data in investment strategies, which involves finding new sources of information, embedding the information, and transforming it from unstructured to structured data sets. After creating an investment strategy, the crucial step is validation, which requires understanding the reliability of the data and how statistically significant the results are. Additionally, it is important to not solely rely on alternative data, and to also consider market data when creating models.

  • 00:25:00 The speakers discuss the importance of alternative data in capturing trends in the market and the difficulties that come with backtesting the data. While previously, technical traders relied on simple metrics like the 120-day moving average, there is now a push to include a range of different categories of data to understand the causes of returns. However, because alternative data is new and did not exist in the past, there is a question of how to backtest it and how consistent it remains over time. The speakers emphasize that understanding the effect of investment strategies requires assessing the stability of the system with respect to random fluctuations.

  • 00:30:00 The speakers discuss the use of alternative data platforms such as Bloomberg Icon and Reuters Quantum by traders to create sound investment strategies. While these platforms have their own models to quantify various forms of data like sentiment data and news data, it is recommended that traders create their own models. Additionally, the speakers talk about the importance of using APIs to receive alternative data inputs and the value of using organized websites like Credit Suisse to analyze company announcements. Finally, the speakers note that narrow, specialized approaches can be quite effective in analyzing market behavior.

  • 00:35:00 The speakers discuss the various tools and websites that can be used to understand the behavior of different asset classes in the financial market, as well as how to follow the market based on investment style and time horizon. While there is no one-size-fits-all solution, qualitative information from websites like Bloomberg can be helpful. The speakers also talk about the importance of understanding sentiment and alternative data like microblogs and chat rooms. However, it is not clear whether becoming an expert in these areas would necessarily lead to a better career in the financial market.

  • 00:40:00 The speaker explains the difference between developing advanced trading strategies for large funds versus simple strategies for independent traders. While complex techniques may be better suited for job seekers at large funds, independent traders are better off starting with a niche strategy that may not be of interest to institutions, and avoiding the high costs associated with complex data feeds. The speaker also notes that there is an increasing interest in new sources of data for trading, making it a relevant field to learn and pursue. They also mention that they use alternative data to a certain extent in their fund management and also help clients implement machine learning and natural language processing-based modules or validate their own strategies using data sets.

  • 00:45:00 A question is asked about Twitter selling blue ticks and whether or not verified accounts would be weighted more in natural language processing (NLP). The panelists initially have trouble understanding the question and later admit they are not qualified to answer it. The discussion then moves on to traditional financial data sources for beginners and students, with Bloomberg and Refinitiv mentioned as potential options, and the suggestion that data providers may offer free data sets with a certain amount of interaction.

  • 00:50:00 The speaker discusses the use of alternative data sources for financial market analysis, specifically mentioning GDELT, a project that collects data from global and local news sources. While it can take a lot of work to filter out the necessary information, the data collected can go back as early as the 1800s, providing a historical perspective on market behaviour. When asked whether alternative data should be used as a sole source or for validation alongside traditional data, the speaker states that there is no general rule and it depends on the specific strategy being used. However, the speaker emphasizes that market data is king and alternative data should not be relied on exclusively.

  • 00:55:00 The speaker discusses the use of alternative data in financial markets and how machine learning could be used to analyze this data. He notes that multiple types of data, including price and fundamental data, would need to be inputted into the machine learning predictive algorithm. However, he also mentions that alternative data cannot be used as a standalone driver and must be coupled with market price input. The speaker concludes the webinar and encourages viewers to reach out with any questions they may have.
Understanding Financial Market Behaviour: The role of multiple categories of data
Understanding Financial Market Behaviour: The role of multiple categories of data
  • 2023.03.02
  • www.youtube.com
Financial markets are influenced by news, (micro) blogs and other categories of online streaming data. These sources of information reach financial market pa...
 

Introduction to Quantitative Factor Investing



Introduction to Quantitative Factor Investing

This video introduces the concept of quantitative factor investing and its classification into different factors, including value, momentum, quality, and size. The speaker explains that factor investing involves selecting securities based on specific factors that are supposed to drive returns and do so for long periods of time. The video covers different quantitative methods that can be used to apply quantitative factor investing, including statistical analysis, factor modeling, machine learning, optimization models, time series analysis, risk models, and Monte Carlo simulation. The speaker also discusses the advantages of using quantitative factor investing and the process of selecting and combining factors, as well as answering questions related to the topic, including questions about data sources and the suitability of the approach for medium/high frequency trading.

In the webinar, Varun Kumar, a quantitative analyst at QuantInsti, provides a comprehensive introduction to quantitative factor investing. He begins by explaining the concept of factors, which are broad and persistent sources of risk and return that guide investors towards quantifiable returns. Some common factors include value, momentum, quality, size, and volatility. Kumar focuses on the quality factor as an example, which involves investing in companies with high-quality characteristics. Financial ratios such as return on equity, growth rates, and profit margins are used to quantify the quality of a company. Stocks with high ratios and high margins are considered high-quality, while those with lower ratios and margins are considered low-quality. Historical data shows that portfolios consisting of high-quality stocks have generated excess returns over long periods of time.
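
A minimal sketch of how such a quality screen might look in code is shown below: each stock's quality metrics are standardized, averaged into a quality score, and the top quintile is held with equal weights. The data file, column names, and quintile cutoff are illustrative assumptions, not Kumar's exact methodology.

```python
# Hypothetical sketch of a quality-factor portfolio: score each stock on return on
# equity and profit margin, then hold the top quintile. Input data is assumed.
import pandas as pd

# fundamentals.csv is assumed to have columns: ticker, roe, profit_margin
fund = pd.read_csv("fundamentals.csv", index_col="ticker")

# Z-score each quality metric so they can be averaged on a common scale
z = (fund - fund.mean()) / fund.std()
fund["quality_score"] = z[["roe", "profit_margin"]].mean(axis=1)

# Keep the top 20% of the universe by quality score, equally weighted
cutoff = fund["quality_score"].quantile(0.80)
portfolio = fund[fund["quality_score"] >= cutoff]
weights = pd.Series(1 / len(portfolio), index=portfolio.index)

print(portfolio.sort_values("quality_score", ascending=False).head())
```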

Kumar then delves into the classification of factors in quantitative factor investing. Factors are categorized into seven types, including macro factors, style-based factors, sectorial factors, ESG-based factors, sentiment-based factors, liquidity-based factors, and technical factors. He provides insights into how each of these factors functions and how they can be used to construct factor portfolios. To illustrate this, he presents examples of strategies built using macroeconomic and style-based factors. These strategies involve utilizing variables such as GDP growth, inflation rate, interest rate, and return on equity to select stocks and build a portfolio. Kumar also highlights the importance of considering factors such as higher return on equity and a low debt-to-equity ratio when selecting stocks for a portfolio.

The webinar further explores various factors that can be incorporated into quantitative factor investing strategies, including style factors, sector-based factors, ESG criteria, sentiment, liquidity, and technical indicators. Kumar explains how these factors can be utilized to develop a logical framework for constructing portfolios and provides real-world examples of strategies that can be implemented using these factors. He briefly touches upon ESG criteria, which stands for environmental, social, and governance criteria, and its role in rating companies based on their impact on society and the environment.

The utilization of mathematical models and statistical analysis in quantitative factor investing is also discussed. Kumar emphasizes that these methods help eliminate emotional biases from investment decisions and allow for exploration of less intuitive factors. He outlines the seven most commonly used quantitative methods in this field, including statistical analysis, factor modeling, machine learning, optimization models, time series analysis, risk models, and Monte Carlo simulations. The video highlights how statistical analysis can be employed to identify patterns and correlations between securities and factors.
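
As a small example of the statistical-analysis step, the sketch below estimates a stock's factor exposures by regressing its returns on a set of factor returns with ordinary least squares; the betas and their p-values indicate how strongly and how reliably the stock responds to each factor. The data files and ticker are assumptions.

```python
# Illustrative factor-exposure regression: estimate how strongly a stock's returns
# respond to a set of factor returns via ordinary least squares. Data files assumed.
import pandas as pd
import statsmodels.api as sm

stock_ret = pd.read_csv("stock_returns.csv", index_col="Date", parse_dates=True)["AAPL"]
factors = pd.read_csv("factor_returns.csv", index_col="Date", parse_dates=True)  # e.g. value, momentum, quality

# Align dates, add an intercept, and fit the regression
aligned = factors.join(stock_ret.rename("stock"), how="inner").dropna()
X = sm.add_constant(aligned.drop(columns=["stock"]))
model = sm.OLS(aligned["stock"], X).fit()

print(model.params)        # factor betas: the stock's sensitivity to each factor
print(model.pvalues)       # statistical significance of each exposure
```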

The advantages of quantitative factor investing in the construction and management of investment portfolios are explored in the webinar. One key advantage is the ability to simulate extreme market conditions, which helps investors better understand the limitations of their portfolios. The speaker emphasizes the differences in approach between traditional and quantitative factor investing, using a case study of a large-cap stock portfolio with low price-to-earnings ratios. While traditional investing involves identifying factors, determining the universe of large-cap stocks, and calculating the factors for each stock before sorting them based on the P/E ratios, quantitative factor investing employs data collection, pre-processing, and feature selection. A model is built to predict stock prices based on the selected features.

The process of quantitative factor investing is explained, emphasizing the importance of building accurate models to predict stock prices based on specific features. The speaker highlights that this approach is data-driven and more objective compared to traditional factor investing, enabling more accurate and reliable analysis. To select the best factors for investing, the factors should be persistent, work across different markets and sectors, be robust to various market conditions, not overly sensitive to changes in market ethics, and possess enough liquidity and capacity.

The webinar also covers the combination of factors in quantitative factor investing. Five commonly used methods are discussed, including equal weight and factor scoring, where each factor is scored based on its historical performance and a weighted average is taken to obtain an overall score. The importance of combining factors is highlighted, as it reduces portfolio risk, increases diversification, and minimizes volatility of performance. The speaker outlines five key characteristics of a best factor, including being supported by empirical evidence, having an economic or financial foundation, offering long-term investment opportunities, being investable, and being intuitive and widely accepted.
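
The factor-scoring approach can be sketched in a few lines: standardize each factor so they are on a comparable scale, then take a weighted average (equal weights being the simplest case) to obtain one composite score per stock. The input file and weights below are assumptions.

```python
# Sketch of the factor-scoring method: standardize each factor, then take a weighted
# average to get one composite score per stock. Columns and weights are assumptions.
import pandas as pd

factors = pd.read_csv("stock_factors.csv", index_col="ticker")   # columns: value, momentum, quality

z = (factors - factors.mean()) / factors.std()        # put all factors on a comparable scale

weights = {"value": 1 / 3, "momentum": 1 / 3, "quality": 1 / 3}  # equal weighting; could instead reflect historical performance
composite = sum(w * z[name] for name, w in weights.items())

ranking = composite.sort_values(ascending=False)
print(ranking.head(10))    # highest-scoring candidates for the factor portfolio
```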

The speaker goes on to discuss several methods for combining factors in quantitative factor investing. One such method is principal component analysis (PCA), which combines multiple factors into a smaller set of uncorrelated components. This approach reduces the number of factors and addresses the issue of correlated factors, also known as multicollinearity. Another method is factor tilting, which involves adjusting the weights or allocations in a portfolio to emphasize a particular factor. This technique offers flexibility and allows investors to target specific factors. Additionally, machine learning can be leveraged to select or combine factors based on their historical performance, capturing non-linear relationships effectively. The speaker emphasizes the importance of using caution when employing deep learning algorithms, as they require substantial amounts of data and can be prone to overfitting. It is recommended to combine them with traditional statistical methods for optimal results.
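
A minimal PCA sketch, assuming a table of factor values per stock, is shown below; it standardizes the factors and compresses them into a few uncorrelated components, addressing the multicollinearity issue described above.

```python
# Minimal PCA sketch: compress correlated factor columns into a few uncorrelated
# components, as described above. The input file is an assumption.
import pandas as pd
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

factors = pd.read_csv("stock_factors.csv", index_col="ticker")

scaled = StandardScaler().fit_transform(factors)      # PCA is sensitive to scale
pca = PCA(n_components=3)
components = pca.fit_transform(scaled)                # uncorrelated combined factors

print(pca.explained_variance_ratio_)                  # variation captured by each component
combined = pd.DataFrame(components, index=factors.index, columns=["PC1", "PC2", "PC3"])
print(combined.head())
```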

Furthermore, the speaker addresses the audience's questions related to quantitative factor investing. The questions cover various topics, such as using price action and long-term charts as factors for investing, where the speaker suggests that it can be used as a technical factor by defining it appropriately and studying its historical performance. The distinction between traded and non-traded factors is explained, with an example of real estate as a non-traded factor due to the difficulty in determining liquidity. The focus of quantitative factor investing is primarily on traded factors, as their data is easily accessible and allows for backtesting. The speaker also provides insights into determining whether a company is more value or growth-focused, suggesting techniques like using the price-to-earnings ratio to define value stocks.

The discussion continues with the exploration of different algorithms used in quantitative factor investing. Algorithms such as recurrent neural networks (RNNs) and long short-term memory (LSTM) are mentioned, with their relevance dependent on the type of data being analyzed. Deep learning techniques can be employed to combine factors and determine optimal weights for each factor, resulting in enhanced portfolio performance. The speaker offers advice on backtesting factor strategies and emphasizes the significance of testing their statistical significance across multiple data sets and markets. The use of Bollinger Bands as a technical indicator to identify sideways markets is also mentioned.
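
As an aside on the Bollinger Band point, a simple way to flag sideways markets is to compute the normalized band width and mark periods where it is unusually narrow. The window, band multiplier, and threshold below are common defaults chosen for illustration, not the speaker's exact settings.

```python
# Illustrative Bollinger Band width calculation for flagging sideways markets.
import pandas as pd

prices = pd.read_csv("prices.csv", index_col="Date", parse_dates=True)["Close"]

mid = prices.rolling(20).mean()
std = prices.rolling(20).std()
upper, lower = mid + 2 * std, mid - 2 * std

band_width = (upper - lower) / mid                                # normalized band width
sideways = band_width < band_width.rolling(252).quantile(0.2)     # narrow bands relative to the past year
print(sideways.tail())
```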

Finally, the webinar concludes with a final Q&A session, where the speaker addresses additional inquiries from the audience. The questions include the role of deep learning algorithms in selecting industry sectors, highlighting various options such as decision trees, neural networks, and random forests. It is emphasized that the selection of the algorithm depends on the specific task and dataset at hand. The speaker reiterates the importance of using deep learning algorithms cautiously due to their data requirements and potential for overfitting. The audience is thanked for their participation, and they are encouraged to provide feedback on the session.

  • 00:00:00 Varun Kumar, a quantitative analyst at QuantInsti, introduces the concept of quantitative factor investing and its classification into different factors such as value, momentum, quality, and size. He explains that factor investing involves selecting securities based on specific factors that are supposed to drive returns, and these factors are technically broad and persistent sources of risk and return. The webinar covers different quantitative methods that can be used to apply quantitative factor investing, and differences between general factor investing and quantitative factor investing. The session concludes with a case study on selecting the best factors and a discussion on how to combine the factors.

  • 00:05:00 The video provides an introduction to quantitative factor investing and explains what factors are. Factors are broad and persistent sources of risk and return, and they guide investors to a particular quantifiable return. Common factors include value, momentum, quality, size, and volatility. To illustrate, the video focuses on the quality factor, which involves investing in companies with high-quality characteristics. The quality of a company is quantified using a combination of financial ratios, such as return on equity, growth rates, and profit margins. A high-quality stock would have high ratios and high margins, while low-quality stocks would have lower ratios and low margins. A portfolio can then be created with a combination of high-quality stocks, which has historically generated excess returns over long periods of time. Factors should be broad and persistent, generating returns across a wide range of assets and over long periods of time, respectively.

  • 00:10:00 The speaker discusses the classification of factors in quantitative factor investing. Factors are classified into seven types, including macro factors, style-based factors, sectorial factors, ESG based factors, sentiment-based factors, liquidity-based factors, and technical factors. They explain how these factors work and how they can be used to create a factor portfolio. They provide examples of strategies built using macroeconomic and style-based factors, which involve using variables like GDP growth, inflation rate, interest rate, and return on equity to select stocks and create a portfolio. The hedge fund manager uses two criteria to select stocks and create a portfolio - higher return on equity and low debt-to-equity ratio.

  • 00:15:00 The speaker discusses various factors that can be used in quantitative factor investing strategies. These factors include style factors, sector-based factors, ESG criteria, sentiment, liquidity, and technical indicators. The speaker explains how these factors can be used to create a logic for a portfolio and gives examples of strategies that can be implemented using these factors. The speaker also briefly explains ESG criteria, which stands for environmental, social, and governance criteria, and how organizations use it to rate companies based on their impact on society and the environment. Finally, the speaker takes a question on ESG criteria and mentions that they will be discussing it further in the upcoming sections.

  • 00:20:00 The video discusses quantitative factor investing and its use of mathematical models and statistical analysis to identify factors and their relation to stocks. These methods allow for the removal of emotional biases from investment decisions and the ability to explore less intuitive factors. The video also lists the seven most commonly used quantitative methods, including statistical analysis, factor modeling, machine learning, optimization models, time series analysis, risk models, and Monte Carlo simulation. Finally, the video briefly touches on the use of statistical analysis to identify patterns and correlations between securities and factors.

  • 00:25:00 The video introduces quantitative factor investing, which involves using statistical analysis to determine a stock's response to certain factors. This information is then used to design a portfolio, with more money going into stocks that respond more strongly to the identified factors. Machine learning techniques are also discussed as a way of discovering and combining factors and making predictions about future performance. Time series analysis can be used to analyze historical data and identify trends in returns, while risk models and Monte Carlo simulations can aid in risk management. Optimization techniques are used to construct portfolios and maximize factor exposure while minimizing risk and transaction costs.

  • 00:30:00 The video explores the different advantages of using quantitative factor investing in the construction and management of investment portfolios. One key advantage is the ability to simulate extreme market conditions to be able to fully understand the limitations of a portfolio. The video also highlights the core differences in approach between traditional and quantitative factor investing, using a case study of a large cap stock portfolio with low price-to-earnings ratios. The traditional approach involves identifying the factor and determining the universe of large cap stocks before calculating the factor for each stock and sorting them from lowest to highest P/E ratios. In contrast, the quantitative factor investing approach uses data collection, pre-processing, and feature selection before building a model to predict stock prices based on the features.

  • 00:35:00 The speaker explains the process of quantitative factor investing, which involves building a model to predict stock prices based on specific features and evaluating the model's accuracy before constructing a portfolio. This approach is data-driven and more objective compared to traditional factor investing, which allows for more subjective analysis. The primary advantage of using quantitative factor investing is that it provides more accurate and reliable analysis of the data. To select the best factors for investing, the factors should be persistent, work across different markets and sectors, be robust to different market conditions, not overly sensitive to changes in market ethics, and investable with enough liquidity and capacity.

  • 00:40:00 The instructor discusses the five key characteristics of the best factors, which include: being backed by empirical evidence, having an economic or financial foundation, offering long-term investment opportunities, being investable, and being intuitive and widely accepted. It is important to combine factors since doing so reduces portfolio risk, increases diversification, and reduces volatility of performance. There are five commonly used methods of combining factors, including equal weighting and factor scoring, where each factor is scored based on its historical performance and a weighted average is taken to obtain an overall score. The instructor emphasizes that a good portfolio not only generates high returns, but also performs with stability across multiple cycles and different market dynamics.

  • 00:45:00 The speaker discusses several methods for combining factors in quantitative factor investing. One such method is PCA (principal component analysis), which combines multiple factors into a smaller set of uncorrelated components. This reduces the number of factors and removes the problem of correlated factors, known as multicollinearity. Another method is factor tilting, which involves adjusting the weights or allocations in a portfolio towards a particular factor. This is flexible and can be used to target specific factors. Finally, machine learning can be used to select or combine factors based on historical performance, capturing non-linear relationships. The speaker then invites questions from the audience and shares some offers for attendees.

  • 00:50:00 The speaker answers several questions related to quantitative factor investing. The first question is about using price action and long-term charts as a factor for investing, to which the speaker responds that it can be used as a technical factor by defining it properly and studying its historical performance. The second question is whether capitalization is a factor, to which the speaker says that size is a factor, and capitalization can be used as one of the factors to determine a strategy depending on the market conditions. The speaker also answers a question about where to get the data, mentioning websites such as Yahoo Finance and paid APIs like Alpha Vantage. Lastly, the speaker responds to a question about how to use quantitative factor investing in medium/high frequency trading, stating that factor investing is more suitable for long-term investors.

  • 00:55:00 The speaker explains which algorithms are particularly useful for selecting industry sectors, mentioning options such as decision trees, neural networks, and random forests; the choice depends on the specific task and data set at hand. However, deep learning algorithms should be used with caution as they require large amounts of data and can be prone to overfitting. It is recommended to use them in combination with traditional statistical methods for optimal results.

  • 01:00:00 The speaker discusses the different algorithms used in quantitative factor investing, such as RNN and LSTM, and how they are dependent on the type of data being analyzed. Deep learning can be used to combine factors and determine the weights to give each factor for optimal performance. The speaker also provides advice on backtesting a factor strategy and testing its statistical significance across multiple data sets and markets. They suggest using Bollinger Bands as a technical indicator to identify sideways markets. The difference between traded and non-traded factors is also explained, with traded factors being based on publicly traded securities and non-traded factors being those that cannot be captured in public markets.

  • 01:05:00 The speaker discusses the difference between traded and non-traded factors, using real estate as an example of a non-traded factor because the liquidity cannot be easily determined. The focus of quantitative factor investing is on traded factors, as the data is easily accessible and public, making it possible to backtest. The speaker also answers a viewer question on how to determine whether a company is more value or growth focused, suggesting techniques such as using the price-to-earnings ratio to define value stocks. Finally, the audience is thanked for their participation and encouraged to provide feedback on the session.
Introduction to Quantitative Factor Investing
Introduction to Quantitative Factor Investing
  • 2023.02.28
  • www.youtube.com
This session covers the concept of factor investing and different types of factor investing strategies including a discussion of passive vs active investing ...
 

Machine Learning for Options Trading



Machine Learning for Options Trading

In the webinar on machine learning for options trading, the speaker, Varun Kumar Patula, starts by providing an introduction to machine learning and its fundamental purpose. He explains that machine learning algorithms are used to analyze data and discover patterns that may go unnoticed by humans. Varun distinguishes between artificial intelligence, machine learning, and deep learning, emphasizing that machine learning is a subset of AI focused on training models to make predictions or decisions based on data. He further categorizes machine learning into three types: supervised learning, unsupervised learning, and reinforcement learning, each with its own characteristics and applications.

The speaker then delves into the application of machine learning in options trading, a key focus of the webinar. Options trading involves the buying or selling of options contracts, which grant the holder the right to buy or sell an asset at a specified price within a specific time frame. Varun highlights the high risk involved in options trading and explains how machine learning can enhance analysis accuracy, thereby reducing risk. He elaborates on the various applications of machine learning in options trading, including pricing options, designing trading strategies, calculating volatility, and forecasting implied volatility. These applications aim to improve decision-making and increase profitability in options trading.

To understand the need for machine learning in options trading, the limitations of traditional models like the Black-Scholes model are discussed. The Black-Scholes model assumes constant risk-free rate and volatility, which may not hold true in real-world scenarios. Varun mentions alternative models like the German Candy model and Heston model, which have their own limitations and input parameter requirements. The solution proposed is to utilize machine learning as a replacement or combination of these models, as it allows for an expanded set of features and input parameters. Machine learning models can consider factors like implied or realized volatility, interest rates, and other relevant features to determine the fair price of options. This enables more accurate pricing, selection of strike prices, and hedging strategies. Varun highlights that empirical research indicates deep learning models with multiple hidden layers, such as the multi-layer perceptron model, outperform the Black-Scholes model, particularly for options that are way out of the money or at the money.
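
A hypothetical sketch of such a learned option pricer is shown below: a multi-layer perceptron trained on option features (moneyness, time to expiry, volatility measures, interest rate) to predict observed option prices. The dataset, columns, and network size are assumptions and the sketch is not a reproduction of the cited research.

```python
# Hypothetical sketch of a neural-network option pricer: a multi-layer perceptron
# trained on option features to predict market prices. Dataset and columns are assumed.
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

options = pd.read_csv("option_chain.csv")
feature_cols = ["moneyness", "time_to_expiry", "implied_vol", "realized_vol", "interest_rate"]
X, y = options[feature_cols], options["option_price"]

# Hold out the most recent 20% of rows for testing (no shuffling)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, shuffle=False)

model = make_pipeline(
    StandardScaler(),
    MLPRegressor(hidden_layer_sizes=(64, 64, 32), max_iter=2000, random_state=0),
)
model.fit(X_train, y_train)
print("Out-of-sample R^2:", model.score(X_test, y_test))
```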

The webinar proceeds to explore the optimization of trading decisions using machine learning models for option strategies. The general process involves analyzing the underlying asset's bullish or bearish sentiment and selecting a suitable strategy accordingly. However, many option strategies have skewed risk-reward distributions, necessitating a more refined analysis. Machine learning can enhance this analysis by considering features like past returns, momentum, and volatility to provide insights on the underlying asset. These features are then inputted into a machine learning model to classify the next trading period as bullish or bearish. The video also touches upon the features used in SP500 index data and emphasizes the importance of feature analysis in option strategy decisions.
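
The classification scheme described above can be sketched as follows: engineer past-return, momentum, and volatility features from the index series, label each day by whether the next day's return is positive, and train a decision tree classifier on a chronological split. The data file and parameters are assumptions.

```python
# Sketch of the bullish/bearish classifier described above, using assumed index data.
import pandas as pd
from sklearn.metrics import accuracy_score
from sklearn.tree import DecisionTreeClassifier

spx = pd.read_csv("sp500_daily.csv", index_col="Date", parse_dates=True)["Close"]

feat = pd.DataFrame(index=spx.index)
feat["ret_1d"] = spx.pct_change()
feat["momentum_5d"] = spx.pct_change(5)
feat["volatility_10d"] = spx.pct_change().rolling(10).std()
feat["target"] = (spx.pct_change().shift(-1) > 0).astype(int)   # 1 = bullish next day
feat = feat.dropna()

split = int(len(feat) * 0.8)                                    # chronological split
train, test = feat.iloc[:split], feat.iloc[split:]
cols = ["ret_1d", "momentum_5d", "volatility_10d"]

clf = DecisionTreeClassifier(max_depth=4, random_state=0).fit(train[cols], train["target"])
print("Test accuracy:", accuracy_score(test["target"], clf.predict(test[cols])))
```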

Next, the speaker focuses on constructing machine learning models for trading decisions on vertical option spreads. They explain that the input parameters remain the same as in the previous example, where a decision tree classifier is used to classify the next trading day as bullish or bearish. To take advantage of options, spreads like bull call spreads or bear put spreads are introduced, as they limit the risk. Machine learning models are combined to forecast the trading range and volatility of the contract. By leveraging these combined models, traders can determine the optimal settings for vertical spreads in their trading strategies while forecasting implied volatility, which is crucial in options trading.

Another application of machine learning in options trading is forecasting implied volatility and making calculated decisions on option strategies. By inputting historical implied volatility and other relevant features into machine learning models, traders can forecast volatility and select appropriate strategies like short straddles or short strangles. The speaker shares a case study where a machine learning model was built to predict the most suitable option strategy based on a list of strategies and input features, including underlying data and options data. By designing a strategy universe and expanding the study to include different contracts, traders can utilize machine learning to create and select the best strategy that aligns with their trading objectives.
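
A minimal sketch of the implied-volatility forecasting idea, assuming a daily series of index implied volatility, is shown below: lagged IV features feed a random forest regressor, and the forecast is turned into a toy flag for when premium-selling strategies such as short straddles might be considered. None of the parameters are the speaker's exact choices.

```python
# Illustrative implied-volatility forecast: regress today's IV on lagged IV features,
# then use the forecast as a rough guide for premium-selling strategies. Data assumed.
import pandas as pd
from sklearn.ensemble import RandomForestRegressor

iv = pd.read_csv("spx_implied_vol.csv", index_col="Date", parse_dates=True)["iv"]

feat = pd.DataFrame(index=iv.index)
feat["iv_lag1"] = iv.shift(1)
feat["iv_lag5"] = iv.shift(5)
feat["iv_change_5d"] = iv.shift(1) - iv.shift(6)
feat["target"] = iv                                   # today's IV is the value being forecast
feat = feat.dropna()

split = int(len(feat) * 0.8)
train, test = feat.iloc[:split], feat.iloc[split:]
cols = ["iv_lag1", "iv_lag5", "iv_change_5d"]

model = RandomForestRegressor(n_estimators=200, random_state=0).fit(train[cols], train["target"])
forecast = pd.Series(model.predict(test[cols]), index=test.index)

# Toy flag: days where forecast IV is elevated relative to its recent history, which a
# premium-selling strategy such as a short straddle might target
signal = forecast > forecast.rolling(60).median()
print(signal.tail())
```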

In the webinar, the speaker describes how they created 27 different strategies for option trading by exploring various combinations of positions and contracts. To refine the strategies, they filtered them down to 20 by eliminating combinations that lacked a position in a call or relied on impractical combinations like short straddles. To determine which of these 20 strategies would provide maximum returns, the speaker employed a machine learning model, specifically a long short-term memory (LSTM) model. This model incorporated input features from underlying assets, options, and volatility, and utilized a multi-class classification system to identify the optimal strategy for deployment.

The video also sheds light on the features related to option Greeks and the structure of the neural network used for the LSTM model. Training the model on approximately 10 years of data, it generated strategy labels based on the input features. The results demonstrated that the machine learning model outperformed the underlying asset over time. To enhance the prediction accuracy of machine learning models for options, the speaker recommends several best practices. These include utilizing probability levels for fine-tuning, employing multiple models, implementing the voting classifier technique, and leveraging the output of multiple classifiers to train another machine learning model for improved accuracy and profitability.

Furthermore, the speaker explores methods to improve the performance of classification models in options trading. These methods involve utilizing probability levels, employing ensemble techniques by combining multiple classifiers, and using machine learning models to aggregate the outputs of different models. The importance of hyperparameter tuning and cross-validation techniques is emphasized to achieve greater accuracy in the models. The speaker also stresses the significance of paper trading before deploying a strategy with real money. This practice enables traders to identify and address any practical issues or challenges before risking actual capital.
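
The ensemble and thresholding ideas can be sketched with scikit-learn as below: several classifiers are combined in a soft-voting ensemble, validated with time-ordered cross-validation folds, and trades are only taken when the ensemble's predicted probability is high. The feature file and the 0.6 threshold are assumptions for illustration.

```python
# Sketch of the ensemble ideas above: combine several classifiers with soft voting,
# validate on time-ordered folds, and act only on high-confidence predictions.
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import TimeSeriesSplit, cross_val_score
from sklearn.tree import DecisionTreeClassifier

# option_features.csv is an assumed file with engineered features and a 0/1 "label" column
data = pd.read_csv("option_features.csv")
X, y = data.drop(columns=["label"]), data["label"]

ensemble = VotingClassifier(
    estimators=[
        ("tree", DecisionTreeClassifier(max_depth=4, random_state=0)),
        ("forest", RandomForestClassifier(n_estimators=200, random_state=0)),
        ("logit", LogisticRegression(max_iter=1000)),
    ],
    voting="soft",                                    # average predicted probabilities
)

# Cross-validate with time-ordered folds rather than random shuffling
scores = cross_val_score(ensemble, X, y, cv=TimeSeriesSplit(n_splits=5))
print("Cross-validation accuracy per fold:", scores)

# Probability threshold: only signal a trade when the ensemble is confident
ensemble.fit(X, y)
proba = ensemble.predict_proba(X)[:, 1]
trade_signal = np.where(proba > 0.6, 1, 0)
print("Share of days traded:", trade_signal.mean())
```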

During the Q&A session that follows, the speaker addresses questions from attendees. The questions cover various topics, including the performance of the machine learning strategy for options trading, the methodology used to select features for the model, the benefits of machine learning over existing technical indicators, the calculation of feature importance, and the appropriate holding period for SPY (the S&P 500 ETF). The speaker clarifies that the strategy's performance is not solely due to market direction in 2020, as the data used for the model extends back to 2010 and encompasses periods beyond 2020. They explain that options trading necessitates a more intricate analysis, considering factors like option Greeks and implied volatility, making machine learning a valuable tool. The selection of features for the model is based on a combination of trading experience and informed decision-making.

Towards the end of the webinar, the speaker discusses the prerequisites for the accompanying course, recommending prior knowledge of machine learning and related courses to maximize its benefits. While the course primarily focuses on building machine learning models for S&P 500 options trading, the concepts can be adapted and applied to other contracts with further training and customization. The course does not provide a pre-built machine learning model, but it equips participants with the knowledge and skills required to construct their own models.

The webinar provides a comprehensive overview of machine learning's application in options trading. It covers the basics of machine learning, its distinctions from other related fields, and the three types of machine learning algorithms. The webinar emphasizes the need for machine learning in options trading due to its ability to enhance analysis accuracy and mitigate risk. Various applications of machine learning in options trading, including pricing options, designing trading strategies, and forecasting implied volatility, are discussed. The webinar also explores the construction of machine learning models for vertical option spreads and the optimization of trading decisions.

  • 00:00:00 The speaker, Varun Kumar Patula, introduces the agenda for the webinar on machine learning for options trading. He begins with a brief introduction to machine learning and its core purpose of using machine learning algorithms to understand or analyze data and find internal patterns that humans typically miss. Varun then explains the differences between artificial intelligence, machine learning, and deep learning. He also notes that there are three types of machine learning algorithms: supervised learning, unsupervised learning, and reinforcement learning. Finally, he highlights the need for the application of machine learning in options trading and dives into the major applications that are both in research and practice, as well as the best practices to keep in mind when applying machine learning techniques for options trading.

  • 00:05:00 The speaker introduces the concept of machine learning and its application in various fields, especially in financial services such as algorithmic trading, portfolio management, and fraud detection. The focus of this webinar is on the application of machine learning for options trading. The speaker explains that options trading involves buying or selling options contracts, which give the holder the right to buy or sell an asset at a particular price before a specific date. Traders use options trading for hedging, income generation, or speculation. The speaker highlights the high risk involved in options trading and explains how machine learning can increase the accuracy of analysis, thereby reducing the risk. Machine learning is used in pricing options, designing trading strategies, calculating volatility, and forecasting the implied volatility of an option. The section concludes by discussing the limitations of the commonly used Black-Scholes model.

  • 00:10:00 The limitations of the Black-Scholes model, such as assuming constant risk-free rate and volatility, are discussed, along with other models like the German Candy model and Heston model, which have their own limitations regarding input parameters. The solution proposed is to implement machine learning as a replacement or combination of these models since machine learning allows for an increase in the feature set and expanded input parameters, unlike traditional models. The ML model can identify the fair price of options by using implied or realized volatility, interest rates, and other features as input, allowing for pricing, strike price selection, and hedging applications. Empirical research shows that the best-performing model is the deep learning model with multiple hidden layers, the multi-layer perceptron model, which outperforms the Black-Scholes model, especially when way out of the money or at the money.

  • 00:15:00 The video discusses how machine learning can be used to optimize trading decisions for option strategies. The general process for a trader involves analyzing the underlying asset and deciding whether it is bullish or bearish, and based on that, selecting a strategy. However, many option strategies are highly risky, with a skew in the risk-reward distribution, so implementing machine learning can improve the analysis of the underlying asset and give better sentiment analysis. The scheme for constructing an ML architecture involves using ML models to do sentiment analysis or forecast the underlying asset. Features such as past returns, momentum, and volatility are used to give information on the asset, and these are input into the machine learning model to classify whether the next trading period will be bullish or bearish. The video also discusses the features used in the S&P 500 index data and the importance of feature analysis.

  • 00:20:00 The speaker discusses constructing machine learning models for trading decisions on vertical option spreads. The input parameters remain the same as in the previous example, where a decision tree classifier is used to classify the next trading day as bullish or bearish. To take advantage of options, spreads are introduced, such as bull call spreads or bear put spreads, where the risk is limited. The idea of combining machine learning models comes into play as one model forecasts the trading range, and another model forecasts whether the contract will be highly volatile or low. By using these models' combinations, a trader can decide the optimal vertical spread settings for trading strategies, while also forecasting implied volatility, which is especially important for option trading.

  • 00:25:00 The speaker explains how machine learning models can be used in options trading by forecasting implied volatility and making calculated decisions on which strategy to take. By inputting historical implied volatility and other features as inputs for machine learning models, traders can forecast volatility and take positions accordingly with strategies such as short straddle or short strangle. The speaker then describes a case study where a machine learning model was built to predict which option strategy to deploy based on a list of strategies and input features such as underlying data and options data. By designing the strategy universe and expanding the study to include different contracts, traders can use machine learning to create and choose the best strategy for their trading needs.

  • 00:30:00 The speaker explains how they created 27 different strategies for option trading using various combinations of positions and contracts. They filtered these strategies down to 20 by removing combinations that didn't include a position in a call, or relied on impractical combinations such as short straddles. They then used a machine learning model, specifically a long short-term memory model, to determine which of these 20 strategies would provide the maximum returns. The model took input features from underlying assets, options, and volatility, and used a multi-class classification system to determine the best strategy to deploy.

  • 00:35:00 The video discusses the features related to option Greeks and the structure of the neural network used for the LSTM model. The model is trained on about 10 years of data and outputs strategy labels based on the input features; the results show that it outperforms the underlying asset over time. The best practices suggested for improving machine learning predictions for options include using probability levels for fine-tuning, using multiple decision models, using the voting-classifier technique, and feeding the outputs of multiple classifiers into another ML model for better accuracy and profitability.

  • 00:40:00 The speaker discusses methods to improve the performance of a classification model for options trading, such as using probability levels, using multiple trees, combining different classifiers through voting-classifier techniques, and using a machine learning model that takes the outputs of multiple models as input (a minimal ensemble sketch appears after this list). The speaker also emphasizes the importance of hyperparameter tuning and cross-validation techniques for greater accuracy. Additionally, the importance of paper trading before deploying a strategy is highlighted, as it allows one to identify practical problems before using real money. A Q&A session follows, with one attendee asking about the speaker's experience.

  • 00:45:00 The speaker discusses the use of Delta in options trading, stating that it can be a profitable strategy depending on the risk-reward capacity and the underlying assets in the portfolio. They caution against relying solely on a Delta hedge strategy and suggest using it in conjunction with other strategies. The speaker also addresses questions about using models that don't match market prices, calculating feature importance, and the holding period for the SPY. They explain how to calculate feature importance and state that different holding periods can be used for forecasting the underlying asset.

  • 00:50:00 The speaker addresses questions from viewers related to the performance of the machine learning strategy for options trading and the methodology used to arrive at the features for the model. They explain that the strategy's performance is not solely due to the market being directional in 2020, as the data used for the model extends back to 2010 and goes beyond 2020. When asked about the benefits of machine learning over existing technical indicators, the speaker emphasizes that options trading requires a more complex analysis of data, including option Greeks and implied volatility, making machine learning a valuable tool. Finally, the speaker explains that features for the model were selected based on a combination of experience in trading and informed decisions.

  • 00:55:00 The speaker discusses the various factors that go into making informed trading decisions using machine learning, such as past returns and technical indicators, and mentions the use of features commonly taken by manual traders and brokers. In response to a question about the LSTM model, they explain that while the current results are based on daily data, high- or medium-frequency trading algorithms can also use tick-by-tick data. Another question asks about the number of trades in the training set, to which they explain that it depends on the case and that the train-test split was 70:30. Lastly, they differentiate between blending and stacking ensemble models, explaining that blending involves taking the outputs of multiple models to train a fresh model.

  • 01:00:00 The course covers the basics of machine learning and its application to options trading. The course focuses on building machine learning models specifically for SP500 options trading, but the concepts can be applied to other contracts with further training and tuning. The course does not provide a ready-to-use machine learning model, but it gives the knowledge and skills required to build one.

  • 01:05:00 In this section, the speaker discusses the prerequisites for the course and mentions that prior knowledge of Machine Learning and related courses would be helpful to make the most out of this course. The speaker also acknowledges the numerous questions received and assures the audience that they will be answered at the end of the webinar through a survey. The webinar is concluded with the speaker thanking the audience and encouraging them to provide feedback to improve future sessions.
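
To make the pricing idea at 00:10:00 concrete, here is a minimal sketch of a multi-layer perceptron fitted to option prices. The feature layout, array shapes, and random placeholder data are assumptions for illustration, not the presenter's actual setup.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor

# Placeholder feature matrix: one row per option quote. Assumed columns:
# moneyness (S/K), time to expiry, implied or realized volatility,
# risk-free rate, dividend yield. Target: observed option mid price.
rng = np.random.default_rng(0)
X = rng.random((10_000, 5))          # stand-in for real option-chain features
y = rng.random(10_000)               # stand-in for observed option prices

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0
)

# Deep feed-forward network (multi-layer perceptron) with several hidden layers,
# the kind of architecture the talk reports as outperforming Black-Scholes.
mlp = MLPRegressor(hidden_layer_sizes=(64, 64, 32), activation="relu",
                   max_iter=500, random_state=0)
mlp.fit(X_train, y_train)
print("Out-of-sample R^2:", mlp.score(X_test, y_test))
```

In practice the placeholders would be replaced by historical option chains, and the network's prices compared against Black-Scholes out of sample.
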
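
The bull/bear classification described at 00:15:00-00:20:00 might be sketched as follows, using a decision tree on past returns, momentum, and volatility to choose between risk-defined vertical spreads. The feature windows, the synthetic price series, and the mapping of labels to spreads are assumptions; only the 70:30 split comes from the talk.

```python
import numpy as np
import pandas as pd
from sklearn.tree import DecisionTreeClassifier

# Placeholder daily closes for the underlying (e.g. an SP500 proxy).
rng = np.random.default_rng(0)
df = pd.DataFrame({"close": 100 * np.cumprod(1 + rng.normal(0, 0.01, 2500))})

df["ret_1d"] = df["close"].pct_change()
df["momentum_10d"] = df["close"].pct_change(10)
df["vol_20d"] = df["ret_1d"].rolling(20).std()
df["label"] = (df["ret_1d"].shift(-1) > 0).astype(int)   # 1 = bullish next day
df = df.dropna()

features = ["ret_1d", "momentum_10d", "vol_20d"]
split = int(len(df) * 0.7)                                # 70:30 train-test split
train, test = df.iloc[:split], df.iloc[split:]

clf = DecisionTreeClassifier(max_depth=4, random_state=0)
clf.fit(train[features], train["label"])
signal = clf.predict(test[features])
# 1 -> deploy a risk-defined bull call spread, 0 -> a bear put spread.
```
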
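
Below is a minimal sketch of the multi-class LSTM strategy selector described at 00:30:00-00:35:00, assuming a Keras-style model. The window length, feature count, and placeholder tensors are assumptions rather than the presenter's configuration.

```python
import numpy as np
from tensorflow.keras.layers import LSTM, Dense
from tensorflow.keras.models import Sequential

n_strategies = 20                    # the filtered strategy universe from the talk
lookback, n_features = 30, 12        # assumed window length and feature count

# Placeholder tensors: sequences of underlying, options and volatility features,
# and an integer label indicating which strategy performed best next period.
rng = np.random.default_rng(0)
X = rng.random((5_000, lookback, n_features)).astype("float32")
y = rng.integers(0, n_strategies, size=5_000)

model = Sequential([
    LSTM(64, input_shape=(lookback, n_features)),
    Dense(64, activation="relu"),
    Dense(n_strategies, activation="softmax"),   # one output unit per strategy
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(X, y, epochs=5, batch_size=64, validation_split=0.3)
```
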
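
The ensembling practices mentioned around 00:35:00-00:55:00 (voting across classifiers, feeding base-model outputs to a further ML model, and probability thresholds) might look roughly like this in scikit-learn. The base models, the 0.6 threshold, and the placeholder data are assumptions.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier, StackingClassifier, VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
X = rng.random((2_000, 6))                    # placeholder feature matrix
y = rng.integers(0, 2, size=2_000)            # placeholder bullish/bearish labels

base_models = [
    ("tree", DecisionTreeClassifier(max_depth=4, random_state=0)),
    ("forest", RandomForestClassifier(n_estimators=200, random_state=0)),
]

# Soft voting across the base classifiers...
voter = VotingClassifier(estimators=base_models, voting="soft")
voter.fit(X, y)

# ...or feed the base-model outputs into a further ML model (stacking).
stack = StackingClassifier(estimators=base_models,
                           final_estimator=LogisticRegression())
stack.fit(X, y)

# "Probability level": act only when the predicted probability clears a threshold.
proba = stack.predict_proba(X)[:, 1]
signal = (proba > 0.6).astype(int)
```
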
Machine Learning for Options Trading
  • 2023.01.19
  • www.youtube.com
This session explains the application of machine learning for options trading. It covers the process of creating options trading strategies using machine lea...
 

Portfolio Assets Allocation with ML and Optimization for Dividend Stocks | Algo Trading Project

The first presentation at the event is delivered by Raimondo Mourinho, an independent AI and big data engineer known for his work with small and medium companies in Italy, providing AI solutions for various corporate functions. Mourinho believes in combining machine learning techniques, statistics, and probability to create advanced trading systems. In his presentation, he shares his practical and scalable framework for developing machine learning models in portfolio asset allocation.

Mourinho begins by introducing the key components required to design such a system. He emphasizes the importance of adopting a portfolio mindset, utilizing machine learning models to convert ideas into actionable strategies, and leveraging the power of multi-CPU, multi-core, and GPU capabilities. These ingredients form the foundation of his framework. While he briefly mentions the need for an infrastructure when going live, he focuses on the elementary blocks of the framework for low-medium frequency trading, acknowledging that the final part of the framework is beyond the scope of the presentation.

The speaker then delves into the competencies necessary for building a robust framework for portfolio asset allocation using machine learning and optimization for dividend stocks in Python. He emphasizes the need for a strong understanding of portfolio techniques, object-oriented programming, multi-processing techniques, and asynchronous programming. Additionally, expertise in hyper-parameter optimization tools, SQL language, and Docker technology is deemed valuable. Mourinho proceeds to explain the first step of the framework, which involves optimizing a database for time series, data preprocessing, handling missing data and outliers, normalizing data, and performing asset selection within the designated asset universe.
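
As an illustration of that first step, here is a minimal pandas sketch of the preprocessing described above (missing data, outliers, and normalization). The universe, thresholds, and random placeholder prices are assumptions, not Mourinho's pipeline.

```python
import numpy as np
import pandas as pd

# Placeholder daily closes for a small dividend-stock universe.
rng = np.random.default_rng(0)
prices = pd.DataFrame(50 + rng.random((1_000, 4)),
                      columns=["AAA", "BBB", "CCC", "DDD"])

# Missing data: forward-fill gaps, then drop rows that are still empty.
prices = prices.ffill().dropna()
returns = prices.pct_change().dropna()

# Outliers: clip returns outside three standard deviations per asset.
clipped = returns.clip(lower=returns.mean() - 3 * returns.std(),
                       upper=returns.mean() + 3 * returns.std(), axis=1)

# Normalization: subtract the mean and divide by the standard deviation (z-score).
normalized = (clipped - clipped.mean()) / clipped.std()
```
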

The presentation moves on to discuss the alpha generation phase, which corresponds to the machine learning terminology for generating trading signals. Mourinho highlights that during this phase, traders incorporate their ideas using various indicators, sentiment analysis, and econometric models. The subsequent step involves feature selection, where redundant features, such as constant and quasi-constant features, non-stationary features, and linearly correlated features, are removed using a rank-based method. Additionally, he mentions the utilization of fractional differentiation, a technique that maintains desired stationarity while preserving crucial information within the features. These improvements are integral to Mourinho's framework for portfolio asset allocation using machine learning and optimization for dividend stocks.
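
A rough sketch of the feature-selection filters described above, assuming an ADF test as the stationarity check and simple variance and correlation thresholds; the cut-off values and placeholder data are illustrative only, and fractional differentiation is omitted.

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.stattools import adfuller

# Placeholder feature matrix: one column per candidate alpha feature.
rng = np.random.default_rng(0)
features = pd.DataFrame(rng.random((500, 8)),
                        columns=[f"f{i}" for i in range(8)])

# 1. Drop constant and quasi-constant features (variance below a small threshold).
features = features.loc[:, features.var() > 1e-4]

# 2. Drop non-stationary features via an ADF test (keep p-value < 0.05).
stationary = [c for c in features.columns if adfuller(features[c])[1] < 0.05]
features = features[stationary]

# 3. Drop one feature from each highly linearly correlated pair.
corr = features.corr().abs()
upper = corr.where(np.triu(np.ones(corr.shape), k=1).astype(bool))
to_drop = [c for c in upper.columns if (upper[c] > 0.95).any()]
features = features.drop(columns=to_drop)
```
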

Rebalancing, which includes asset selection and weight allocation, is thoroughly explained in the learning pipeline. Mourinho employs cross-sectional momentum, based on relative strength between assets, for asset selection. For weight allocation, he combines traditional techniques like the critical line algorithm, inverse volatility portfolio, and equal-weighted portfolio with machine learning models such as hierarchical risk parity and hierarchical equal risk contribution. The speaker showcases simulation results and evaluates performance using historical data. He also mentions his intention to further enhance the portfolio by incorporating techniques like the Drunken Monkey strategy and combinatorial purged cross-validation. Moreover, Mourinho stresses the significance of effective money management when applying these techniques to live trading scenarios.
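
Of the weight-allocation techniques listed above, the inverse volatility portfolio is the simplest to sketch; hierarchical risk parity and the critical line algorithm require clustering or quadratic optimization and are omitted here. The returns below are placeholders.

```python
import numpy as np
import pandas as pd

# Placeholder daily returns of the assets that passed the momentum screen.
rng = np.random.default_rng(0)
returns = pd.DataFrame(rng.normal(0, 0.01, size=(252, 5)),
                       columns=["A", "B", "C", "D", "E"])

# Inverse volatility portfolio: weight each asset by 1/sigma, then normalize.
inv_vol = 1 / returns.std()
inv_vol_weights = inv_vol / inv_vol.sum()

# Equal-weighted portfolio for comparison.
equal_weights = pd.Series(1 / returns.shape[1], index=returns.columns)

print(inv_vol_weights.round(3))
```
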

To address parameter variability estimation, Mourinho recommends employing techniques such as Monte Carlo simulation and bootstrapping. He presents the results of his analysis, focusing on terminal wealth and maximum drawdown percentiles. The speaker emphasizes the importance of remaining data-driven and not becoming overly attached to specific trading ideas. He also advises mitigating idiosyncratic risk by employing different techniques and avoiding overfitting by selecting simpler systems with comparable performance. Lastly, he underscores the need to continuously monitor and adjust live trading systems due to the non-stationary nature of time series data.
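
A minimal sketch of the bootstrapping idea described above: resampling a return series to estimate percentiles of terminal wealth and maximum drawdown. The return series, number of resamples, and percentile choices are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
daily_returns = rng.normal(0.0004, 0.01, size=2_500)   # placeholder strategy returns

def max_drawdown(equity):
    peaks = np.maximum.accumulate(equity)
    return ((peaks - equity) / peaks).max()             # positive drawdown fraction

terminal_wealth, drawdowns = [], []
for _ in range(5_000):
    sample = rng.choice(daily_returns, size=daily_returns.size, replace=True)
    equity = np.cumprod(1 + sample)
    terminal_wealth.append(equity[-1])
    drawdowns.append(max_drawdown(equity))

print("5th / 95th percentile terminal wealth:", np.percentile(terminal_wealth, [5, 95]))
print("95th percentile max drawdown:", np.percentile(drawdowns, 95))
```
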

During the Q&A session, Mourinho responds to several questions from the audience. One participant asks about the most critical step in the pipeline, to which Mourinho highlights data preprocessing as essential and time-consuming. Another query revolves around data normalization, and Mourinho suggests the common practice of subtracting the mean and dividing by the standard deviation in most cases. Regarding removing linear correlation using Principal Component Analysis (PCA), he acknowledges its possibility but cautions about the potential loss of meaning in the features and suggests considering metrics like the Sharpe ratio to interpret the results effectively.

The speaker proceeds to discuss the use of PCA for feature selection and its potential impact on the interpretability of the features. Aspiring quantitative and algorithmic traders are advised to consider EPAT (Executive Program in Algorithmic Trading) as a valuable starting point. They highlight that the program offers comprehensive learning objectives aligned with the industry's requirements. Attendees of the webinar are offered an extended early bird admission to the program and can book a course counseling call to understand how it can assist them in achieving their career goals, whether it's establishing an algorithmic trading desk or incorporating advanced technologies and tools into their trading strategies.

Kurt Celestog, a project manager at Hong Kong Exchange and Clearing Limited, takes the stage to share his project on portfolio management, which extends Jay Palmer's lecture on quantitative portfolio management. Celestog's project focuses on optimizing dividend yield through portfolio management. His objective is to generate a regular dividend income stream while ensuring stability and growth in dividend payouts, all while maintaining the portfolio's value. He aims to surpass the benchmark index or ETF in both dividend yield and price return through optimal portfolio management techniques. Celestog faced the challenge of acquiring dividend data and developed web scraping functions to download it. He divided the dataset into two parts, each covering ten years and encompassing economic recessions and expansions.

The speaker discusses the challenges encountered during the data cleansing process for dividend stock portfolio optimization. The data obtained from the website was not clean and required modifications and normalization to express dividends in dollar amounts, especially with early dividends initially presented as percentages. Price data was sourced from Yahoo Finance, and metrics such as annual dividend yield, dividend growth, and average growth were calculated. A composite ratio was derived for all selected stocks to create two portfolios: an equally weighted portfolio and a weight-optimized portfolio. The speaker aimed to analyze whether a single optimization, followed by a ten-year holding period, would outperform the benchmark and the ETF.
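
A rough sketch of the metric construction described above; since the exact composite ratio Celestog used is not specified, the blend of standardized yield and growth below is an assumption, as are the placeholder dividend and price tables.

```python
import numpy as np
import pandas as pd

# Placeholder history: rows are years, columns are stocks.
rng = np.random.default_rng(0)
stocks = list("ABCDEF")
dividends = pd.DataFrame(1.0 + rng.random((10, 6)), columns=stocks)   # $ per share
prices = pd.DataFrame(80 + 20 * rng.random((10, 6)), columns=stocks)  # year-end price

dividend_yield = (dividends / prices).mean()         # average annual dividend yield
dividend_growth = dividends.pct_change().mean()      # average year-on-year growth

# Composite ratio: an assumed blend of standardized yield and growth.
composite = ((dividend_yield - dividend_yield.mean()) / dividend_yield.std()
             + (dividend_growth - dividend_growth.mean()) / dividend_growth.std())

top5 = composite.nlargest(5).index
equal_weighted = pd.Series(0.2, index=top5)          # equally weighted portfolio
print(composite.sort_values(ascending=False))
```
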

The speaker then shares the results of the portfolio optimization project utilizing machine learning techniques. The graphs presented depict green bubbles in the top left quadrant, representing the five stocks with the highest combined metric. Both the equal-weighted and optimally weighted portfolios exhibited higher mean returns and dividend yields than the benchmark. However, over the next ten years, banking and technology stocks gained more popularity, causing the optimized portfolio's performance to decline relative to the benchmark. To improve performance, the speaker experimented with rebalancing the portfolios regularly and selecting the best five stocks based on the chosen metric. The rebalanced portfolios outperformed the benchmark and demonstrated a higher dividend yield.

The speaker emphasizes how portfolio optimization and regular rebalancing can lead to higher dividend yields and outperform benchmark indices, especially with dividend stocks like Real Estate Investment Trusts (REITs). By rebalancing portfolios every six months and exploring different look-back periods, the speaker successfully outperformed the index in terms of average dividend yield, dividend growth, return, and lower drawdowns. However, they acknowledge the challenges in obtaining and cleansing data and note that the rebalancing function can be complex, suggesting the use of object-oriented programming to address this complexity. Overall, the speaker highlights that portfolio optimization and regular rebalancing are valuable tools for investors.
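
A minimal sketch of the six-month rebalancing loop described above, using a simple trailing return as a stand-in for the selection metric; the look-back length, universe size, and placeholder prices are assumptions.

```python
import numpy as np
import pandas as pd

# Placeholder monthly total-return prices for a REIT/dividend-stock universe
# (rows are months, columns are stocks).
rng = np.random.default_rng(0)
prices = pd.DataFrame(np.cumprod(1 + rng.normal(0.005, 0.04, (120, 8)), axis=0),
                      columns=[f"S{i}" for i in range(8)])

lookback, holding = 12, 6            # 12-month look-back, rebalance every 6 months
period_returns = []

for start in range(lookback, len(prices) - holding, holding):
    window = prices.iloc[start - lookback:start]
    metric = window.iloc[-1] / window.iloc[0] - 1     # stand-in for the composite metric
    picks = metric.nlargest(5).index                  # best five stocks this period
    future = prices[picks].iloc[start:start + holding]
    period_returns.append((future.iloc[-1] / future.iloc[0] - 1).mean())  # equal weight

print("Average 6-month portfolio return:", np.mean(period_returns))
```
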

The speaker points out that frequent portfolio rebalancing is crucial for achieving outperformance. However, due to the infrequency of dividend data availability for dividend stocks, it is challenging to rebalance more frequently than once or twice a year. The speaker also emphasizes the need for further work on the project, including exploring different optimization criteria, incorporating more stocks into the portfolio for increased diversification, and conducting extensive backtesting. They suggest expanding the universe of REITs and discussing the impact of transaction costs on portfolio performance.

During the Q&A session, Celestog answers questions from the audience. One participant asks about the performance of the equal-weighted portfolio compared to the optimized portfolio. Celestog explains that the equal-weighted portfolio generally performed well, but the optimized portfolio yielded higher returns, demonstrating the effectiveness of portfolio optimization techniques. Another attendee inquires about the impact of transaction costs on the portfolio's performance. Celestog acknowledges that transaction costs can have a significant impact and suggests incorporating them into the optimization process to obtain a more accurate representation of real-world performance. He also mentions the importance of considering slippage in live trading scenarios and advises participants to thoroughly test their strategies using historical data before implementing them in live trading.

Overall, the presentations at the webinar shed light on the practical aspects of portfolio asset allocation using machine learning and optimization techniques for dividend stocks. The speakers highlighted the importance of data preprocessing, feature selection, rebalancing, and regular monitoring to achieve successful results. They also emphasized the need for continuous learning, adaptability, and exploration of different strategies to navigate the dynamic nature of the financial markets. The audience gained valuable insights into the challenges, techniques, and potential benefits of utilizing machine learning in portfolio management.

  • 00:00:00 The first presentation is on portfolio asset allocation presented by Raimondo Mourinho. Mourinho is an independent AI and big data engineer who works with various small and medium companies in Italy to come up with AI end-to-end solutions for the corporate functions like marketing, HR, sales, and production. He believes in combining machine learning techniques with statistics and probability to design superior trading systems. In the presentation, Mourinho shares his practical and scalable framework for machine learning development in portfolio asset allocation.

  • 00:05:00 The speaker introduces the scalable framework for portfolio weight allocation and explains the ingredients needed to design such a system. The three ingredients are designing the system with a portfolio mindset, using machine learning models to convert ideas into strategies, and leveraging multi-CPU, multi-core, and GPU capabilities. The speaker also shares the elementary blocks of the framework for low-medium frequency trading and briefly mentions the need for an infrastructure when going live. The speaker does not cover the last part of the framework, as it is out of the scope of the presentation.

  • 00:10:00 The speaker discusses the competencies required to build a framework for portfolio asset allocation using machine learning and optimization for dividend stocks in Python with classes. Competencies such as knowledge of portfolio techniques, object-oriented programming, multi-processing techniques, and asynchronous programming are necessary. The use of hyper-parameter optimization tools, knowledge of SQL language, and Docker technology is also important. The speaker then moves on to discuss the first step of the framework, which involves optimizing a database for time series, data preprocessing, dealing with missing data and outliers, data normalization, and performing asset selection within the asset universe.

  • 00:15:00 The speaker discusses the signal-generation step, known among traders as the alpha generation phase. During this phase, the trader incorporates any ideas that come to mind using various indicators, sentiment analysis, and econometric models. The next step is the feature selection phase, where unnecessary features are removed, including constant and quasi-constant features, non-stationary features, and linearly correlated features, using a rank-based method. The speaker also mentions the use of fractional differentiation, which achieves the desired stationarity while retaining some of the information within the feature itself. These are the improvements the speaker is working on as part of his framework for portfolio asset allocation with ML and optimization for dividend stocks.

  • 00:20:00 The speaker explains the rebalancing phase of the learning pipeline, which involves asset selection and weight allocation. For asset selection, the speaker uses cross-sectional momentum based on relative strength between assets. For weight allocation, traditional techniques like the critical line algorithm, inverse volatility portfolio, and equal weighted portfolio are used along with machine learning models like hierarchical risk parity and hierarchical equal risk contribution. The results of simulations are shown, and the speaker evaluates performance using historical data. The speaker plans to improve the portfolio by adding techniques like the Drunken Monkey strategy and combinatorial purged cross-validation. Lastly, the speaker emphasizes the importance of money management when applying these techniques to live trading.

  • 00:25:00 The speaker discusses the importance of estimating the range of variability of parameters and suggests using techniques such as Monte Carlo simulation and bootstrapping to accomplish this. They then present the results of their analysis focusing on terminal wealth and maximum drawdown percentiles. The speaker emphasizes the need to be data-driven and to not fall in love with trading ideas. They also recommend mitigation of idiosyncratic risk by using different techniques and avoiding overfitting by choosing simpler systems with comparable performance. Finally, they stress the need to monitor and adjust live trading systems due to the highly non-stationary nature of time series.

  • 00:30:00 The speakers discuss a few questions from the audience about portfolio asset allocation using ML and optimization for dividend stocks. One audience member asks which step in the pipeline deserves the most attention, to which Mourinho replies that data pre-processing is essential and the most time-consuming step. Another question asks about data normalization, and Mourinho suggests that subtracting the mean and dividing by the standard deviation works well in most cases. Lastly, when asked about removing linear correlation using PCA, Mourinho mentions that it is possible but warns that it could result in losing the meaning of the features, and suggests using metrics like the Sharpe ratio to explain the results.

  • 00:35:00 The speaker discusses the use of PCA for feature selection and the potential loss of meaning of the features after applying PCA. He advises aspiring quantitative and algorithmic traders to consider EPAT as a great start and mentions that the program offers comprehensive learning objectives aligned with the industry's needs. The early bird admission for the program is extended to webinar attendees, and they can book a course counseling call to understand how the program can help achieve their career goals, including starting an algo trading desk or applying advanced technologies and tools in their trading strategies.

  • 00:40:00 Kurt Celestog, a project manager at Hong Kong Exchange and Clearing Limited, shares his project on portfolio management, which extends Jay Palmer's lecture on quantitative portfolio management to optimizing dividend yield. His motivation is to obtain a regular dividend income stream while ensuring that dividend payouts are stable and grow over time and that the portfolio value does not decrease. He aims to beat the benchmark index or ETF in both dividend yield and price return through optimal portfolio management techniques. Celestog faced the challenge of obtaining dividend data and had to code web-scraping functions to download it; he divided the dataset into two parts of ten years each, covering economic recessions and expansions.

  • 00:45:00 The speaker discusses the challenges faced in data cleansing for dividend stock portfolio optimization. The data from the website was not clean and had to be modified and normalized so that dividends were expressed in dollar amounts, as the early dividends were reported as percentages. Price data was obtained from Yahoo Finance, and metrics such as annual dividend yield, dividend growth, and average growth, among other price metrics, were calculated from the data. A composite ratio was calculated for all the selected stocks and used to create two portfolios: one equally weighted and the other weight-optimized. The speaker wanted to analyze whether a single optimization, followed by holding the portfolio for ten years, would outperform the benchmark and the ETF.

  • 00:50:00 The speaker discusses the results of their portfolio optimization with machine learning project. The top left quadrant of the graphs displays green bubbles representing the five stocks with the highest combined metric. The speaker calculated the equal-weighted and optimally weighted portfolios, both with a higher mean return and dividend yield than the benchmark. However, in the next ten years, banking and technology stocks grew more popular and the optimized portfolio started performing worse than the benchmark. The speaker tried to improve its performance by rebalancing every period and selecting the best five stocks based on the chosen metric. The rebalanced portfolios outperform the benchmark and have a higher dividend yield.

  • 00:55:00 The speaker discusses how portfolio optimization and regular rebalancing can achieve a higher dividend yield and outperform benchmark indices, especially with dividend stocks like Real Estate Investment Trusts (REITs). By rebalancing portfolios every six months and using different look-back periods, the speaker was able to outperform the index both in terms of average dividend yield, dividend growth, return, and lower drawdowns. However, obtaining and cleansing data proved to be challenging, and the function for rebalancing was complex, which could be addressed using object-oriented programming. Overall, the speaker suggests that portfolio optimization and regular rebalancing can be valuable tools for investors.

  • 01:00:00 The speaker notes that frequent portfolio rebalancing is necessary for outperformance, but the infrequency of dividend data for dividend stocks and REITs makes it difficult to rebalance more often than once or twice a year. The speaker also highlights the need for further work on the project, such as exploring different optimization criteria, adding more stocks to the portfolio for greater diversification, and backtesting more extensively. They also suggest extending the universe of REITs and stocks and keeping a personal database due to the limited history and survivorship bias. Finally, they answer audience questions about the limited market region used in the project and the weight optimization procedure used.

  • 01:05:00 The speaker discusses how outliers can affect machine learning models, particularly linear regression and neural networks. These models are highly sensitive to outliers, so the speaker recommends treating outliers using techniques like interquartile ranges, and mentions lasso and ridge regression; since linear models still provide the best results in trading in his view, treating outliers is important (a minimal interquartile-range clipping sketch appears after this list). The speaker also offers advice on what it takes to become an algo trader, recommending a multi-disciplinary approach that includes understanding the markets, market microstructure, coding skills, and machine learning concepts.

  • 01:10:00 The speaker discusses the importance of learning and understanding how to apply programming languages, such as Python, to diversify and manage one's investment portfolio effectively. They highlight the benefits of taking a comprehensive course in algo trading that covers market functions, coding, and risk management, even for those who do not intend to engage in high-frequency trading. The course's intensity and comprehensiveness offer something for everyone and provide a good foundation for personal use in one's financial life. The speakers conclude with a discussion of their future plans and the demand for further exploration of topics related to algo trading in upcoming sessions.
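
As referenced at the 01:05:00 mark above, here is a minimal sketch of interquartile-range outlier treatment; the fence multiplier and placeholder return series are assumptions.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
returns = pd.Series(rng.normal(0, 0.01, 1_000))   # placeholder daily returns
returns.iloc[::200] = 0.25                        # inject a few extreme outliers

q1, q3 = returns.quantile(0.25), returns.quantile(0.75)
iqr = q3 - q1
lower, upper = q1 - 1.5 * iqr, q3 + 1.5 * iqr

# Clip anything outside the interquartile-range fences before feeding the series
# to an outlier-sensitive model such as linear regression.
treated = returns.clip(lower=lower, upper=upper)
print("Values clipped:", int((returns != treated).sum()))
```
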
Portfolio Assets Allocation with ML and Optimization for Dividend Stocks | Algo Trading Project
  • 2022.12.13
  • www.youtube.com
EPAT project presentations on “Portfolio Asset Allocation with Machine Learning: A Practical and Scalable Framework for Machine Learning Development” by two ...
Reason: