Projects
I plan to use this page to keep a more-or-less up-to-date list of broad research areas. I welcome collaboration with anyone (especially students!), so please reach out by email or stop by my office and talk to me.
Asynchronous and Randomized Linear Solvers
This is a continuation of my past work on accelerating the convergence of (asynchronous) linear solvers using randomization. Currently, I’m interested in how to dynamically select an appropriate probability distribution from which to sample component updates, in assembling a convincing set of experimental results, and in establishing theoretical bounds on the performance improvement that might be possible.
A good introduction to some of these ideas is my paper from the High Performance Computing Symposium in 2019, which is available here.
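To make the idea concrete, here is a minimal sketch (not the method from the paper) of a randomized component-wise relaxation solver for A x = b, where the sampling distribution is re-derived from the current residual at each step. The residual-proportional distribution is just one hypothetical choice of a "dynamic" distribution; the matrix and all parameters are made up for illustration.

```python
import numpy as np

def randomized_relaxation(A, b, num_updates=5000, seed=0):
    """Randomized component-wise solver for A x = b.

    At each step a single component i is sampled and relaxed:
        x_i <- (b_i - sum_{j != i} A_ij x_j) / A_ii
    Sampling probabilities are recomputed from the current residual,
    illustrating one (hypothetical) dynamically chosen distribution.
    """
    rng = np.random.default_rng(seed)
    n = len(b)
    x = np.zeros(n)
    for _ in range(num_updates):
        r = np.abs(b - A @ x)                  # component-wise residual magnitudes
        total = r.sum()
        p = r / total if total > 0 else np.full(n, 1.0 / n)
        i = rng.choice(n, p=p)                 # sample a component to update
        x[i] = (b[i] - A[i] @ x + A[i, i] * x[i]) / A[i, i]
    return x

# Strictly diagonally dominant test system, so relaxation converges
n = 50
rng = np.random.default_rng(1)
A = rng.random((n, n)) + n * np.eye(n)
b = rng.random(n)
x = randomized_relaxation(A, b)
print(np.linalg.norm(A @ x - b))
```

In an asynchronous setting each worker would run this loop against a shared (possibly stale) copy of x; the sequential version above just isolates the sampling question.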
Performance Modeling
Being able to accurately model the performance of high-performance computing methods is incredibly helpful in deciding how to use (and/or modify) the tools we have.
In the past, I’ve modeled the performance of parallel algorithms and developed simulations to predict the performance of highly parallel linear algebra routines on a variety of potential computer architectures.
This current research project (in collaboration with researchers at Old Dominion University) aims to create models that track performance alongside power usage, centered on applications related to machine learning.
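As a flavor of what such models can look like, here is a minimal sketch (not the project's actual model) combining a roofline-style time estimate with a crude power model to get an energy estimate. All machine parameters below are made-up placeholders, not measurements.

```python
def roofline_time(flops, bytes_moved, peak_flops=1e12, bandwidth=1e11):
    """Execution time (seconds) bounded by either compute or memory traffic."""
    return max(flops / peak_flops, bytes_moved / bandwidth)

def energy(flops, bytes_moved, idle_power=50.0, dynamic_power=150.0, **kw):
    """Energy = power * time, with a crude static + dynamic power split (watts)."""
    t = roofline_time(flops, bytes_moved, **kw)
    return (idle_power + dynamic_power) * t

# Dense matrix-vector product with n = 10_000:
# roughly 2*n^2 flops and 8*n^2 bytes of traffic (one double per entry of A)
n = 10_000
flops, bytes_moved = 2 * n**2, 8 * n**2
print(roofline_time(flops, bytes_moved), energy(flops, bytes_moved))
```

Even a toy model like this shows why performance and power have to be modeled together: a memory-bound kernel such as the one above leaves the compute units idle, so its energy cost is dominated by time spent waiting on bandwidth rather than by arithmetic.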