Time Series Classification using a Frequency Domain EM Algorithm

Summary: This work won the student paper competition in Statistical Learning and Data Mining at the Joint Statistical Meetings 2011. You can find “A Frequency Domain EM Algorithm for Time Series Classification with Applications to Spike Sorting and Macro-Economics” on arXiv; it has also been published in SAM.

Let’s say you have n time series and you want to classify them into groups of similar dynamic structure. For example, you have per-capita income time series for all 48 contiguous (“lower 48”) US states and want to classify them into groups. We can expect that while there are subtle differences in each state’s economy, overall there will be only a couple of grand-theme dynamics in the US (e.g., the east coast and the mid-west probably have different economic dynamics). There are several ways to classify such time series (see the paper for references).

I introduce a nonparametric EM algorithm for time series classification by viewing the spectral density of a time series as a density on the unit circle and treating it just like a plain pdf. And what do we do to classify data in statistics/machine learning? We model the data as a mixture distribution and find the classes using an EM algorithm. That’s what I do too – but I use it on spectral densities and periodograms rather than on the “true” multivariate pdf of the time series. Applying this methodology to the per-capita income series yields 3 clusters, and a map of the US shows that these clusters also make sense geographically.
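To make the idea concrete, here is a hedged sketch of such a frequency-domain EM in base R: each series is summarized by its periodogram, cluster “centers” are spectral densities, the E-step uses Whittle log-likelihoods, and the M-step averages periodograms by responsibility. All function and variable names are illustrative; this is not the paper’s actual implementation.

```r
## Whittle log-likelihood of a periodogram under a candidate spectral density
whittle_loglik <- function(pgram, f) -sum(log(f) + pgram / f)

cluster_spectra_em <- function(X, K, n_iter = 50) {
  # X: n x T matrix, one time series per row
  pgrams <- t(apply(X, 1, function(x)
    spec.pgram(x, plot = FALSE, taper = 0)$spec))
  n <- nrow(pgrams)
  pi_k <- rep(1 / K, K)
  # initialize cluster spectra from K randomly chosen periodograms
  f_k <- pgrams[sample(n, K), , drop = FALSE]
  for (iter in seq_len(n_iter)) {
    # E-step: responsibilities from Whittle log-likelihoods + log-priors
    ll <- sapply(seq_len(K), function(k)
      apply(pgrams, 1, whittle_loglik, f = f_k[k, ]) + log(pi_k[k]))
    ll <- ll - apply(ll, 1, max)     # stabilize before exponentiating
    resp <- exp(ll) / rowSums(exp(ll))
    # M-step: cluster spectra = responsibility-weighted mean periodograms
    f_k <- t(resp) %*% pgrams / colSums(resp)
    pi_k <- colMeans(resp)
  }
  list(cluster = max.col(resp), spectra = f_k, resp = resp)
}
```

For instance, clustering AR(1) series with positive versus negative autocorrelation (two very different spectral shapes) with `K = 2` recovers the two groups.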

May the ForeC be with you: R package ForeCA v0.2.0

I just submitted a new, majorly improved ForeCA R package to CRAN. Motivated by a bug report on whiten(), I rewrote and tested many of the package’s main functions; ForeCA is now shinier than ever.

For R users not a lot will change (see the changelog): just use it as usual via foreca(X), where X is your multivariate, (approximately) stationary time series (as a matrix, data.frame, or ts object in R).

library(ForeCA)

# daily log returns (in percent) of the four European stock indices
ret <- ts(diff(log(EuStockMarkets)) * 100)

# ForeCA with a "wosa" (Welch overlapping segment averaging) spectrum estimate
mod <- foreca(ret, spectrum.control = list(method = "wosa"))
mod
summary(mod)
plot(mod)

I will add a vignette in upcoming versions.

ForeCA: Forecastable Component Analysis

Forecastable component analysis (ForeCA) is a novel dimension reduction (DR) technique that finds optimally forecastable signals in multivariate time series (published in JMLR).

ForeCA works similarly to PCA or ICA, but instead of finding high-variance or statistically independent components, it finds forecastable linear combinations.

ForeCA is based on a new measure of forecastability that I propose. It is defined as

$\Omega(x_t) = 1 - \frac{H(x_t)}{\log 2\pi}$,

where

$H(x_t) = - \int_{-\pi}^{\pi} f_x(\lambda) \log f_x(\lambda) \, d\lambda$

is the entropy of the spectral density $f_x(\lambda)$ of the process $x_t$ (normalized to integrate to one). You can easily convince yourself that $0 \leq \Omega(x_t) \leq 1$: it equals 0 for white noise and 1 for a (countable sum of) perfect sinusoids. Thus larger values mean that the signal is easier to forecast. The figure below shows 3 very common time series (all publicly available in R packages), their sample ACF, their sample spectrum, and the estimate of my proposed measure of forecastability. For details see the paper; I just want to point out here that it intuitively measures what we expect: stock returns are barely forecastable (1.5%), tree ring data a bit more (15.86%), and monthly temperatures very much so (46.12%). In the paper I don’t study the properties of my estimators in detail or how to improve them, but use simple plug-in techniques. I am sure the estimates can be improved upon (in particular, I would expect the forecastability of the monthly temperature series to be much closer to 100%).
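A minimal plug-in sketch of such an estimate in base R: normalize the periodogram to a discrete density, compute its entropy, and rescale by the maximum attainable entropy. This discretized version is only illustrative, not ForeCA’s exact estimator; `omega_hat` is a made-up name.

```r
## Plug-in forecastability: one minus the normalized spectral entropy.
## 0 ~ white noise (flat spectrum), values near 1 ~ almost periodic signal.
omega_hat <- function(x, spans = NULL) {
  sp <- spec.pgram(x, spans = spans, taper = 0, plot = FALSE)
  p  <- sp$spec / sum(sp$spec)      # normalize to a discrete density
  entropy <- -sum(p * log(p))
  1 - entropy / log(length(p))      # in [0, 1]
}

set.seed(1)
omega_hat(rnorm(512))                                          # close to 0
omega_hat(sin(2 * pi * (1:512) / 16) + rnorm(512, sd = 0.1))   # much larger
```

Smoothing the periodogram (via `spans`) trades bias for variance in the entropy estimate, which is exactly where the plug-in approach leaves room for improvement.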

Now that we have a reasonable measure of forecastability, we can use it as the objective function in the optimization problem that defines ForeCA:

$w^{*} = \arg\max_{w} \Omega(w' X_t) \quad \text{subject to} \quad w' \Sigma_X w = 1.$

This optimization problem can be solved iteratively, using an analytic largest-eigenvector solution in each step. Voilà, this is ForeCA! When applied to hedge-fund returns (equityFunds in the fEcofin R package) I get a most forecastable portfolio, and the ACF of the extracted sources indeed shows that they are ordered so that forecasting is easier for the first ones and difficult (to impossible) for the last ones:
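The paper solves this with closed-form eigenvector steps; purely to illustrate the objective, here is a hedged brute-force sketch that maximizes a spectral-entropy forecastability over unit-variance weight vectors with plain `optim()`. The whitening step enforces the variance constraint; all names are illustrative and this is not ForeCA’s actual algorithm.

```r
## Illustrative forecastability of a univariate series (see definition above)
omega_of <- function(x) {
  p <- spec.pgram(x, plot = FALSE, taper = 0)$spec
  p <- p / sum(p)
  1 + sum(p * log(p)) / log(length(p))
}

most_forecastable <- function(X) {
  # X: T x k multivariate series; whiten so every w'X has unit variance
  W <- solve(chol(cov(X)))               # X %*% W has identity covariance
  Z <- scale(X, scale = FALSE) %*% W
  obj <- function(w) -omega_of(drop(Z %*% (w / sqrt(sum(w^2)))))
  fit <- optim(rnorm(ncol(X)), obj, method = "BFGS")
  w <- fit$par / sqrt(sum(fit$par^2))
  list(weights = W %*% w, omega = -fit$value)
}

set.seed(7)
n <- 400
sig <- sin(2 * pi * (1:n) / 12)          # one forecastable component
X <- cbind(sig + rnorm(n, sd = 0.5), rnorm(n), rnorm(n))
res <- most_forecastable(X)
res$omega                                # forecastability of extracted series
```

The analytic eigenvector iterations in the paper avoid this generic numerical search, but the objective being maximized is the same.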

I also provide the R package ForeCA – because there is not much I hate more than authors presenting new methods but hiding their code, just to squeeze out another couple of papers before someone else finally deciphers their completely obscure, incomplete description of the fancy new method they propose.

All good things come in threes: 3rd time student paper competition winner (JSM 2012)

Driven by my competitive side, I dug up a manuscript that had long been hidden on my hard drive, entitled “Testing for white noise against locally stationary alternatives”. After a few days of polishing, I submitted it to the 2012 JSM student paper competition held by the Section on Statistical Learning and Data Mining, sponsored by the journal of the same name (SAM). And to my – positive – surprise it was selected as one of five winners – just like last year and in 2007.

San Diego here I come.

Update: pdf at academia.edu. A more polished, updated version has been published in SAM.

Oops I did it again: winner of the JSM 2011 student paper competition

My paper “A Frequency Domain EM Algorithm to Detect Similar Dynamics in Time Series with Applications to Spike Sorting and Macro-Economics” was selected as one of three major winners in the JSM 2011 student paper competition on Statistical Learning and Data Mining. arXiv: 1103.3300.

This is the second time, after my 2007 JSM award for the time-varying long memory paper.