Beyond Black-Box Scaling: Interpretability, Algorithms, and GPUs
The recent success of Artificial Intelligence is largely attributed to the rise of Deep Learning, a paradigm made possible by the massive parallel computing power of GPUs and the efficiency of first-order optimization. However, while Deep Learning excels at pattern recognition, many critical problems in operations research, healthcare, and finance require certifiably optimal solutions; such problems are traditionally modeled as Mixed-Integer Programs (MIPs). Despite their importance, traditional MIP solvers rely on CPU-based, second-order methods that scale poorly to modern high-dimensional datasets and cannot easily leverage GPU acceleration. In this talk, we take a significant first step toward bridging this hardware-software gap by solving a class of MIP problems, specifically sparse generalized linear models (GLMs), to global optimality using a GPU-friendly framework. We propose a unified proximal first-order framework that achieves provable linear convergence by exploiting novel geometric properties of the perspective relaxation. We demonstrate that our method leverages GPU acceleration to speed up dual bound computations by orders of magnitude, significantly enhancing the capability of Branch-and-Bound frameworks to certify optimality for large-scale problems. Our results show substantial speedups over state-of-the-art commercial solvers such as Gurobi and MOSEK, suggesting a new path forward for high-performance optimization in the era of big data.
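To make the algorithmic family concrete: a minimal sketch of a proximal first-order method for a sparse regression problem is iterative hard thresholding, which alternates a gradient step with a projection onto the sparsity constraint. This toy example is an assumption-laden illustration of the general idea only; it is not the perspective-relaxation framework, the GPU implementation, or the Branch-and-Bound machinery described in the talk.

```python
import numpy as np

# Illustrative sketch (not the talk's method): iterative hard thresholding,
# a proximal first-order scheme for  min ||Ax - b||^2  s.t.  ||x||_0 <= k.

def hard_threshold(x, k):
    """Proximal step for the l0 constraint: keep the k largest-magnitude
    entries of x and zero out the rest."""
    out = np.zeros_like(x)
    idx = np.argsort(np.abs(x))[-k:]
    out[idx] = x[idx]
    return out

def iht(A, b, k, step=None, iters=500):
    """Projected gradient descent on the sparsity-constrained least squares."""
    if step is None:
        # 1 / L, where L = ||A||_2^2 is the gradient's Lipschitz constant.
        step = 1.0 / np.linalg.norm(A, 2) ** 2
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        grad = A.T @ (A @ x - b)          # gradient of 0.5 * ||Ax - b||^2
        x = hard_threshold(x - step * grad, k)
    return x

# Small noiseless synthetic instance: recover a 3-sparse signal.
rng = np.random.default_rng(0)
A = rng.standard_normal((60, 20))
x_true = np.zeros(20)
x_true[[3, 7, 12]] = [2.0, -1.5, 1.0]
b = A @ x_true
x_hat = iht(A, b, k=3)
print(np.nonzero(x_hat)[0])  # indices of the recovered support
```

Unlike a heuristic run of this kind, the framework in the talk pairs such first-order iterations with dual bounds inside Branch-and-Bound, so the sparse solution comes with a certificate of global optimality.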
Bio: Jiachang Liu is an assistant research professor at the Center for Data Science for Enterprise and Society. His research interests include (1) creating interpretable and trustworthy ML solutions for high-stakes decision making in domains such as healthcare, criminal justice, and finance; (2) designing efficient discrete and continuous optimization techniques to solve the related optimization problems, which are usually nonconvex and combinatorial in nature; and (3) building open-source, user-friendly software packages for the broader data science community. His long-term goal is to let humans and machines seamlessly collaborate and complement each other. Prior to joining Cornell, Liu completed his Ph.D. in electrical and computer engineering at Duke University.