Deep Learning for Algorithmic Trading

Deep neural networks (DNNs) are powerful artificial neural networks (ANNs) with several hidden layers. They have recently gained considerable attention in the speech transcription and image recognition communities (Krizhevsky et al., 2012). This project has investigated various aspects of deep learning for algorithmic trading.

In 2015, with Diego Klabjan (NWU) and Jin Hoon Bang (NWU), we developed the first deep learning model for classifying directional price movements across several futures markets. The model is both compute- and data-intensive and was implemented on Intel's Xeon Phi, a 61-core processor. The work appeared in Algorithmic Finance and in ACM conference proceedings. The C++ code and synthetic data are available under the software link.

The research has since been extended to limit order book data. With Nick Polson (Booth) and Vadim Sokolov (GMU), we developed spatio-temporal DNNs for price prediction from the limit order books of futures markets. Computational details of the LSTM/RNN models and TensorFlow source code, together with sample LOB data, are available under the software tab.

By far the most challenging aspect of applying machine learning is not the prediction but the specification of the trading strategy. This project has developed a probabilistic framework that uses confusion matrices as inputs to the expected P&L, with examples of avoiding adverse selection in market making; the details appear in High Frequency. More details.
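The confusion-matrix-to-P&L idea can be made concrete with a small sketch. This is not the framework from the paper: the class labels, counts, and per-outcome gains below are hypothetical, chosen only to show how joint (true, predicted) probabilities weight an assumed payoff matrix.

```python
import numpy as np

# Hypothetical sketch: expected P&L per trade from a classifier's confusion matrix.
# Rows = true direction (down, flat, up), columns = predicted direction.
# Counts and payoffs are illustrative, not from the paper.
confusion = np.array([[80, 15,  5],
                      [20, 60, 20],
                      [ 4, 16, 80]], dtype=float)

# gains[i, j] = assumed P&L (in ticks) of acting on prediction j when the
# true move is i: correct direction earns a tick, wrong direction loses one.
gains = np.array([[ 1.0, 0.0, -1.0],   # true: down
                  [ 0.0, 0.0,  0.0],   # true: flat
                  [-1.0, 0.0,  1.0]])  # true: up

# Normalize counts to joint probabilities of (true, predicted) outcomes.
probs = confusion / confusion.sum()

expected_pnl = (probs * gains).sum()
print(f"Expected P&L per trade: {expected_pnl:.4f} ticks")
```

A payoff matrix with asymmetric losses on wrong-direction trades would capture adverse selection more directly; the uniform ±1 ticks here are purely for illustration.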

Machine Learning and Econometrics

Machine learning can be viewed as a non-parametric extension of statistical inference: it treats the data-generating process as unknown and instead introduces regularization, which can be motivated as minimizing the Kullback-Leibler divergence between the fitted model and that process. Information criteria are well known in econometrics, and this project is developing new regularization information criteria for machine-learning-based time series models. The research is expected to produce new theory and regularization approaches for machine learning in finance. Joint work with Tyler Ward (Google, NYU) and Zhibai Zhang (NYU). More details.
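To make the link between information criteria and KL divergence concrete, here is a minimal sketch (not the project's method): AIC is an estimator of the expected KL divergence between the true process and a fitted model, so minimizing it over model order trades goodness of fit against complexity. The AR models and least-squares fit below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate an AR(2) series: x_t = 0.5 x_{t-1} - 0.3 x_{t-2} + eps_t.
n = 500
x = np.zeros(n)
for t in range(2, n):
    x[t] = 0.5 * x[t - 1] - 0.3 * x[t - 2] + rng.standard_normal()

def aic_ar(x, p):
    """AIC of a least-squares AR(p) fit (Gaussian likelihood, additive constant dropped)."""
    # Design matrix of lagged values x_{t-1}, ..., x_{t-p}.
    X = np.column_stack([x[p - k - 1 : len(x) - k - 1] for k in range(p)])
    y = x[p:]
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    sigma2 = resid @ resid / len(y)
    # -2 * max log-likelihood (up to a constant) + 2 * number of parameters.
    return len(y) * np.log(sigma2) + 2 * (p + 1)

orders = range(1, 6)
scores = {p: aic_ar(x, p) for p in orders}
best = min(scores, key=scores.get)
print("AIC-selected AR order:", best)
```

With enough data, the selected order concentrates near the true order 2; regularization information criteria of the kind the project studies generalize this penalty term beyond a simple parameter count.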

Algorithms for Computational Finance

Price and risk models of financial derivatives are often numerically intensive and difficult to calibrate. This research project is assessing the extent to which computational design frameworks enable quants to express a variety of simulation-based and deterministic approximation approaches in terms of a small number of specific algorithms that scale on many-core CPUs. The project draws on developments in statistical computing, mathematical software, and computer engineering. This research is funded by Intel Corp. More details.
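As a minimal example of the simulation-based kernels such frameworks aim to express and scale, here is a vectorized Monte Carlo pricer for a European call under geometric Brownian motion. This is a generic textbook sketch, not the project's framework; the function name and parameters are hypothetical.

```python
import numpy as np

def mc_call_price(s0, k, r, sigma, t, n_paths, seed=0):
    """Monte Carlo price of a European call under geometric Brownian motion."""
    rng = np.random.default_rng(seed)
    z = rng.standard_normal(n_paths)
    # Terminal asset price under the risk-neutral measure.
    st = s0 * np.exp((r - 0.5 * sigma**2) * t + sigma * np.sqrt(t) * z)
    payoff = np.maximum(st - k, 0.0)
    # Discounted expected payoff, averaged over all simulated paths.
    return np.exp(-r * t) * payoff.mean()

price = mc_call_price(s0=100.0, k=100.0, r=0.05, sigma=0.2, t=1.0, n_paths=200_000)
print(f"MC call price: {price:.3f}")  # Black-Scholes value is about 10.45
```

The payoff evaluation is embarrassingly parallel across paths, which is why kernels of this shape map well onto many-core hardware.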

Preconditioned Krylov Subspace Methods for Water Resource Simulations

Postdoctoral research with Prof. Zhaojun Bai (CS, UC Davis). We investigated the effect of matrix scaling on preconditioned GMRES (PGMRES) solvers applied to integrated water resource management models developed by the California State Department of Water Resources. We found that row equilibration is important for sharpening upper-bound estimates of the forward error. Sharp control of the forward error allows the residual stopping tolerance to be chosen more efficiently, avoiding excessive iterations and yielding a 7x speedup over the baseline implementation. More details.
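The mechanism can be illustrated with a small NumPy sketch (the project itself worked with PGMRES on the actual water-resource matrices; the matrix here is synthetic). Row equilibration divides each row of A by its infinity norm, which typically reduces the condition number, and the standard forward-error bound is proportional to the condition number times the residual-based backward error, so a smaller condition number gives a sharper bound.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic system with badly scaled rows (row norms spanning 8 orders of magnitude).
A = rng.standard_normal((50, 50)) * np.logspace(0, 8, 50)[:, None]

d = np.abs(A).max(axis=1)     # row infinity norms
A_eq = A / d[:, None]         # equilibrated system: D^{-1} A x = D^{-1} b

print(f"cond(A)        = {np.linalg.cond(A):.2e}")
print(f"cond(D^-1 A)   = {np.linalg.cond(A_eq):.2e}")
```

Note that scaling by D changes the residual the solver monitors but not the solution x, so the stopping tolerance can be set against the better-conditioned system.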

Geometric Integrators for Continuum Dynamics

Geometric numerical methods seek to transfer powerful concepts in geometric mechanics to computational continuum dynamics by preserving geometric structure. This doctoral research with Prof. Darryl Holm (Imperial College) and Prof. Sebastian Reich (Potsdam University) developed a unified computational framework for deriving novel geometric integrators for continuum dynamics, with applications in geophysical flows, pseudo-rigid bodies, elastic rods and many other dynamical systems. More details.
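A one-dimensional example conveys what "preserving geometric structure" buys. The Störmer-Verlet scheme below is a standard symplectic integrator applied to a harmonic oscillator with Hamiltonian H(q, p) = (p² + q²)/2; it is a generic illustration of the class of methods, not an integrator from the thesis. Because the map is symplectic, the energy error stays bounded over long integrations instead of drifting as it does for explicit Euler.

```python
def verlet(q, p, dt, n_steps):
    """Stormer-Verlet (kick-drift-kick) steps for H = (p^2 + q^2) / 2."""
    for _ in range(n_steps):
        p_half = p - 0.5 * dt * q   # half kick (force = -dH/dq = -q)
        q = q + dt * p_half         # full drift
        p = p_half - 0.5 * dt * q   # half kick
    return q, p

q0, p0 = 1.0, 0.0
q, p = verlet(q0, p0, dt=0.1, n_steps=10_000)

# Energy drift after 10,000 steps; bounded at O(dt^2) for a symplectic method.
energy_drift = abs(0.5 * (q**2 + p**2) - 0.5 * (q0**2 + p0**2))
print(f"Energy drift: {energy_drift:.2e}")
```

The doctoral work generalizes this idea from finite-dimensional mechanics to continuum systems, where the preserved structures include momentum maps and circulation rather than a single scalar energy.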