Discussing the article: "The case for using a Composite Data Set this Q4 in weighing SPDR XLY's next performance"


Check out the new article: The case for using a Composite Data Set this Q4 in weighing SPDR XLY's next performance.

We consider XLY, SPDR’s consumer discretionary ETF, and see whether, with the tools in MetaTrader’s IDE, we can sift through an array of data sets to select which could work with a forecasting model with a forward outlook of no more than a year.

SPDR’s consumer discretionary ETF, XLY, was launched on 22 December 1998 and has grown to an AUM of slightly over US$16 billion as of November 2023. The "bling" ETF among SPDR’s 11 sector funds, it gives investors exposure to specialty retail, hotels, luxury goods & apparel, automobiles, and companies providing other non-essential expenditures that consumers may indulge in. That definition, though, can present some ambiguity: is AMZN, for instance, really a discretionary goods seller, a staple goods seller, or a tech company? The latter two are covered by separate ETFs, so investors seeking AMZN exposure can only get it from this sector ETF. In principle, though, the ETF is for companies that sell non-essential goods, purchased when consumers have disposable income, and as a result it is traditionally very sensitive to economic cycles.


The MQL5 language, together with MetaTrader’s IDE, is a hotbed for developing not just indicators and scripts that assist manual trading, but also autonomous trading systems, as most readers are well aware. Trading ETFs on the platform of course depends on whether your broker offers them. Time series analysis and forecasting is a useful part of a trading system, and it is the focus of this article. We will consider various time series data sets as candidates for a model that makes projections of XLY’s performance. Taking this a step further, we will see whether there is any benefit in using a composite data set that brings together the features (data columns) with the highest feature-importance weightings into a single data set to run with our model.
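The composite-data-set idea above can be sketched quickly. This is a minimal illustration in Python rather than MQL5, and it stands in for the article's actual method: the candidate column names are hypothetical, the data is synthetic, and absolute Pearson correlation with the target is used as a stand-in for a proper feature-importance weighting. The point is only the mechanic: score each candidate data set, rank, and keep the top columns as one composite set.

```python
import random
import statistics

random.seed(7)
n = 400

# Hypothetical candidate time series (stand-ins for real data sets);
# the names are illustrative, not taken from the article.
features = {
    "xly_returns_lag1": [random.gauss(0, 1) for _ in range(n)],
    "spy_returns_lag1": [random.gauss(0, 1) for _ in range(n)],
    "consumer_sentiment": [random.gauss(0, 1) for _ in range(n)],
    "rates_10y_change": [random.gauss(0, 1) for _ in range(n)],
}

# Synthetic target driven mainly by two of the candidates, plus noise,
# so the ranking step has something real to find.
y = [0.8 * a + 0.5 * b + 0.1 * random.gauss(0, 1)
     for a, b in zip(features["xly_returns_lag1"],
                     features["consumer_sentiment"])]

def importance(x, target):
    """Absolute Pearson correlation as a stand-in importance weighting."""
    mx, my = statistics.fmean(x), statistics.fmean(target)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, target))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in target) ** 0.5
    return abs(cov / (sx * sy))

# Rank the candidates and keep the top k columns as the composite data set.
k = 2
ranked = sorted(features, key=lambda name: importance(features[name], y),
                reverse=True)
composite_columns = ranked[:k]
composite = {name: features[name] for name in composite_columns}
print("composite columns:", composite_columns)
```

In a real pipeline the importance scores would come from the forecasting model itself (or a permutation test), but the select-then-merge step the article investigates is the same.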

Author: Stephen Njuki