Tim Davis
• Professor, Computer Science & Engineering

Educational Background

  • Ph.D., University of Illinois at Urbana-Champaign, 1989

Research Interests

  • Research Vision: High-Performance Combinatorial Scientific Computing

    The coming decades of high-performance computational mathematics will be increasingly dominated by heterogeneous computing based on hybrid multiprocessors with a mix of conventional CPU cores and high-throughput GPU cores.  This trend is driven by power constraints and the need for ever-increasing performance.  Although these new heterogeneous systems can deliver high performance with low energy requirements, new algorithms must be developed to exploit them.  The most challenging problems for these systems require a mix of regular and irregular computation.  My research into direct methods for sparse linear systems lies within this problem domain and is also a critical component of many applications in computational mathematics.

    My experience makes me well positioned for future research in GPU-based algorithms.  As a world leader in algorithmic research for sparse matrix computations, I combine graph-theoretic methods and numerical techniques to create algorithms for solving problems in computational science that arise across a wide range of applications.  I incorporate my novel theory and algorithms into robust, library-quality, open-source software, which is widely used in industry, academia, and government labs.  In the past decade, I have published more software in the ACM Transactions on Mathematical Software than any other author (9% of the algorithmic output of that journal).

    A primary thrust of my current and future work is the creation of parallel algorithms for sparse multifrontal LU, QR, and Cholesky factorization on hybrid multicore CPU/GPU multiprocessors.  The workflow in these algorithms is an irregular tree: the nodes carry the bulk of the work (regular operations on dense frontal matrices of varying sizes), and the edges represent the irregular assembly of data from child to parent.  The GPUs operate on many frontal matrices at a time, and data is assembled from child to parent frontal matrix without moving it to the CPU.  No other sparse direct method employs these strategies for GPU computing.  By creating widely used, high-performance algorithms and software that encapsulate these ideas, this research will extend the capability of heterogeneous computing to a wide domain of applications in computational science and mathematics.
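
    As a rough, CPU-only illustration of this workflow (not the actual implementation), the sketch below, in Python/NumPy, walks an assembly tree in post-order, factorizes each dense frontal matrix with regular dense kernels, and scatters each child's Schur complement into its parent.  The Node class, the simplified assembly step, and the fixed block split are hypothetical; pivoting, symbolic analysis, and the GPU offload are all omitted.

        # Schematic sketch of a multifrontal Cholesky workflow, for illustration
        # only; names and structure are hypothetical, not SuiteSparse code.
        import numpy as np

        class Node:
            def __init__(self, front, children=()):
                self.front = front          # dense frontal matrix (regular work)
                self.children = children    # child subtrees (irregular structure)
                self.contribution = None    # Schur complement passed to the parent
                self.factors = None         # factor blocks kept for a later solve

        def factorize(node):
            # Post-order traversal: factorize every child, then the parent.
            for child in node.children:
                factorize(child)
                assemble(node.front, child.contribution)    # irregular edge work

            F = node.front
            k = max(1, F.shape[0] // 2)                     # illustrative pivot block
            L11 = np.linalg.cholesky(F[:k, :k])             # dense, regular kernel
            L21 = np.linalg.solve(L11, F[:k, k:]).T         # triangular solve
            node.factors = (L11, L21)
            node.contribution = F[k:, k:] - L21 @ L21.T     # Schur complement

        def assemble(parent_front, contribution):
            # Scatter-add of the child's contribution into the parent's front;
            # the true index mapping is elided and shown as a leading-block add.
            m = contribution.shape[0]
            parent_front[:m, :m] += contribution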

    In recognition of my research efforts in high-performance computing via GPU-based algorithms, NVIDIA has designated Texas A&M as a CUDA Research Center.

    Connections to Computer Science

    The core of my work lies within the domain of combinatorial scientific computing, which combines combinatorial and graph-theoretic algorithms with numerical methods, and hence draws on nearly all of classical computer science.  My graph and sparse matrix algorithms are important to several core areas in computer science, including social network analysis, computer vision, computer graphics, and robotics.  For example, all of the robotics projects at OpenSLAM.org that rely on sparse matrix algorithms to solve Simultaneous Localization and Mapping problems use my software.  One application relies on my sparse Cholesky update/downdate algorithms to process a dynamic set of images collected by a swarm of robots as they update a 3D model of the region under exploration.  Robotics provides a promising avenue for future research and applications of my work in applied mathematics and sparse linear algebra.

    Connections to Mathematics and Operations Research

    I have a long history of creating novel algorithms for new and challenging problems in applied mathematics and optimization.  Through a 16-year collaboration with Bill Hager in the Math Department at the University of Florida, I have created the only asymptotically optimal algorithms for the sparse Cholesky update/downdate problem.  This problem arises in dual active set methods in optimization, where the factorization must be updated as the basis set changes.  I have also developed multilevel graph partitioning algorithms based on Hager's quadratic-programming formulation.  Optimization presents unique challenges for the development of sparse matrix and combinatorial algorithms, and it provides a rich source of promising problems, including low-rank update/downdate of sparse factorizations, rook pivoting, rank-deficient matrices, and finding well-conditioned basis sets.
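
    To make the update/downdate problem concrete, the dense rank-one update below (the classic Givens-style formulation, written in Python/NumPy purely for illustration, with a hypothetical function name) rewrites the Cholesky factor L of A into the factor of A + w w^T, column by column.  My sparse algorithms solve the same problem while touching only the part of L that actually changes; this sketch is not that sparse method or its API.

        # Dense rank-one Cholesky update, shown only to illustrate the problem.
        import numpy as np

        def chol_update(L, w):
            # In place: on return, L @ L.T equals the old L @ L.T + np.outer(w, w).
            w = np.asarray(w, dtype=float).copy()
            for k in range(len(w)):
                r = np.hypot(L[k, k], w[k])                  # new diagonal entry
                c, s = r / L[k, k], w[k] / L[k, k]           # Givens-like rotation
                L[k, k] = r
                L[k+1:, k] = (L[k+1:, k] + s * w[k+1:]) / c  # update column k
                w[k+1:] = c * w[k+1:] - s * L[k+1:, k]       # carry w to later columns
            return L

    The downdate (removing w w^T) follows the same loop with r = sqrt(L[k,k]**2 - w[k]**2) and the s*w term subtracted rather than added; it can break down when the downdated matrix is not positive definite, which is one of the numerical subtleties a robust implementation must handle.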

    Connections to Industry, Open-Source Projects, and Government Labs 

    Industry, government labs, and open-source projects rely heavily on my work.  My sparse solvers appear in dozens of commercial applications from The MathWorks (x=A\b in MATLAB), Google, NVIDIA, Berkeley Design Automation, Freescale, COMSOL, MSC Software (NASTRAN), Cadence, IBM, Wolfram (Mathematica), ANSYS, Mentor Graphics, and many other companies.  Every photo in Google Street View, Photo Tours, and 3D Earth is placed into proper position using my software via Google's Ceres non-linear least squares solver.  The US Geological Survey uses my sparse solvers to process images of the Earth, Moon, Mercury, and other planetary bodies, cutting their time-to-solution from hours or days down to minutes.  My open-source software has been adopted by all major Linux distributions, and is also widely used in open-source packages such as GIMP (GEGL), R, Octave, FEniCS, ROS (robotics), Boost, Julia, and Scilab, to name just a few.  Sandia's Xyce circuit simulation package and Berkeley's OpenSees earthquake simulator both rely on my sparse solvers.  My software consistently demonstrates the speed and reliability essential for industrial-quality software.

    Algorithmic Artwork

    My algorithmic artwork has appeared on billboards all over London as the theme artwork for the London Electronic Arts Festival.  To create this art, I use Fourier transforms, graph algorithms, sparse matrices, and MATLAB to translate music into art, via mathematical rules of my own creation.  For more examples and a description of how I create this art, see notesartstudio.com.

Awards & Honors

  • Fellow of the Society for Industrial and Applied Mathematics (SIAM), cited for contributions to sparse matrix algorithms and software, including the University of Florida Sparse Matrix Collection.
  • Fellow of the Association for Computing Machinery (ACM), class of 2015.
  • Fellow of the Institute of Electrical and Electronics Engineers (IEEE), class of 2016.

Selected Publications

  • See the PDF file posted under my photo ('Full CV') for a complete list of publications.