INFORMS 2021 Program Book
INFORMS Anaheim 2021
SC13 CC Room 201A In Person: Optimization with Noisy Intermediate-Scale Quantum 1
General Session
Chair: Tamás Terlaky, Lehigh University, Bethlehem, PA, 18015-1518, United States

1 - A Quantum Interior Point Method for Sum of Squares Optimization
Brandon Augustino, Lehigh University, PA, United States
We present a provably convergent quantum interior point method (IPM) for Sum of Squares (SOS) optimization problems, building on recent advances in quantum linear system solvers. By quantizing an IPM that solves the SOS problem directly, we avoid semidefinite optimization hierarchies, which leads to a better dependence on the dimension of the problem. The quantization of classical interior point methods is the subject of several recent papers in the literature. We compare the theoretical performance of classical and quantum interior point methods with respect to various input parameters.

2 - Large-scale Inference of Sparsely-varying Markov Random Fields
Salar Fattahi, Assistant Professor, University of Michigan, Ann Arbor, MI, 94702-2147, United States, Andres Gomez
We study the problem of inferring time-varying Markov random fields (MRFs), where the underlying graphical model is both sparse and changes sparsely over time. Most existing methods for the inference of time-varying MRFs rely on regularized maximum likelihood estimation, which typically suffers from weak statistical guarantees and high computational cost. Instead, we introduce a new class of constrained optimization problems for the inference of sparsely-changing MRFs. The proposed optimization problem is formulated based on exact L0 regularization and can be solved in near-linear time and memory. Moreover, we show that the proposed estimator enjoys a provably small estimation error. Our proposed method is extremely efficient in practice: it can accurately estimate time-varying graphical models with more than 500 million variables within one hour.

3 - Improving QAOA With Warm-start Initializations and Custom Mixers
Reuben Tate, Georgia Institute of Technology, Atlanta, GA, United States, Bryan Gard, Greg Mohler, Swati Gupta
In this talk, we consider bridging classical optimization techniques with quantum algorithms. We propose using classical warm-starts (obtained via solutions to low-rank semidefinite programming relaxations) to initialize the starting state of the Quantum Approximate Optimization Algorithm (QAOA) in the context of the MAX-CUT problem. In addition to changing the initial state, we also consider changing the mixing Hamiltonian in a way that allows us to analyze QAOA through the lens of quantum adiabatic algorithms. Our experiments suggest that this modified version of QAOA is robust against quantum noise and yields improved cuts over standard QAOA or the Goemans-Williamson algorithm, even with low circuit depth and limited training time, for most instances. We provide simulation and theoretical results on the performance of the proposed framework.

4 - Inexact Feasible Interior Point Method (IF-IPM) for Linear Optimization (LO) with High Adaptability to Quantum Computers
Tamás Terlaky, Lehigh University, Bethlehem, PA, 18015-1518, United States, Mohammadhossein Mohammadisiahroudi, Ramin Fakhini, Luis F. Zuluaga
Quantum computing offers Quantum Linear System Algorithms (QLSAs) to solve Newton systems in IPMs. Since QLSAs inherently produce inexact solutions, and their computational complexity depends heavily on the condition number of the Newton system, we propose an IF-IPM for LO problems using a novel system that allows inexact computation but warrants feasible steps. We also discuss how QLSAs can be used in an Iterative Refinement (IR) scheme to find an exact solution efficiently. Our IF-QIPM enhanced with IR enjoys better time complexity than other quantum and classical IPMs with respect to the dimension.
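The iterative refinement idea in the last abstract, solving a residual system with a cheap inexact solver until the residual is driven below tolerance, can be sketched generically as follows. This is an illustrative sketch, not the authors' IF-QIPM: `inexact_solve` is a hypothetical classical stand-in for a QLSA, and all names and noise parameters are assumptions.

```python
import numpy as np

def inexact_solve(A, b, noise=1e-3):
    """Hypothetical stand-in for a QLSA: returns a solution of A x = b
    perturbed by a small error relative to the solution norm."""
    x = np.linalg.solve(A, b)
    rng = np.random.default_rng(0)
    return x + noise * np.linalg.norm(x) * rng.standard_normal(x.shape)

def iterative_refinement(A, b, tol=1e-10, max_iter=50):
    """Drive the residual of A x = b below tol using only inexact solves:
    each pass solves the residual system and adds the correction."""
    x = np.zeros_like(b)
    for _ in range(max_iter):
        r = b - A @ x                     # current residual
        if np.linalg.norm(r) < tol:
            break
        x = x + inexact_solve(A, r)       # inexact correction step
    return x
```

Because each correction's error is proportional to the size of the remaining residual, the error contracts geometrically even though every individual solve is low-precision.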
SC14 CC Room 201B In Person: Statistical Properties of Machine Learning Methods
General Session
Chair: Namjoon Suh, Georgia Institute of Technology, Atlanta, Georgia, United States

1 - K-class Classification Problem and Properties of Convolutional Neural Nets Trained by Gradient Descent
Hyunouk Ko
Recent advances in classification problems using deep learning models have yielded state-of-the-art results in various domains. In this talk, I will first review the current literature on neural networks that studies their optimization and generalization properties, along with limitations such as constraints on data and network architecture. Then, I will discuss the k-class classification problem using the most widely used form of CNN and illustrate its convergence behavior when trained by the (stochastic) gradient descent algorithm. I will further consider conditions under which we are in a linear regime, and how we can obtain generalization guarantees that do not rely on uniform convergence bounds.

2 - Asymptotic Theory of ℓ1-Regularized PDE Identification from a Single Noisy Trajectory
Namjoon Suh, Georgia Institute of Technology, Atlanta, GA, United States
We prove support recovery for a general class of linear and nonlinear evolutionary partial differential equation (PDE) identification problems from a single noisy trajectory, using an ℓ1-regularized Pseudo-Least Squares model (ℓ1-PsLS). In any associative R-algebra generated by finitely many differentiation operators that contain the unknown PDE operator, applying ℓ1-PsLS to a given data set yields a family of candidate models with coefficients c(λ) parameterized by the regularization weight λ ≥ 0. The trace of {c(λ)}, λ ≥ 0, suffers from high variance due to data noise and finite difference approximation errors.
We provide a set of sufficient conditions which guarantee that, from a single trajectory denoised by a local-polynomial filter, the support of c(λ) asymptotically converges to the true signed support associated with the underlying PDE for sufficiently many data points and a certain range of λ. We also show various numerical experiments to validate our theory.

3 - Analyzing Manufacturing Variation with Complex-Structured Data
Anh Tuan Bui, Virginia Commonwealth University, Richmond, VA, 23284, United States, Daniel Apley
We present a dissimilarity-based manifold learning framework for discovering unknown systematic variation sources from complex manufacturing data structures. Visualizing individual variation patterns based on the learned manifold provides diagnostic information on the root causes of the variation sources. We discuss two applications of the framework: stochastic textured surfaces (e.g., material microstructures) and unstructured point clouds.

4 - Q-learning for Online Nonparametric Monitoring of High-dimensional Heterogeneous Data Streams
Haoqian Li, PhD Student, University of Wisconsin-Madison, Madison, WI, United States
High-dimensional data streams are becoming common in many applications. Meanwhile, resource constraints often restrict the observability of these streams, which poses challenges for statistical process control and quality improvement. In this article, we propose a Q-learning-based algorithm to monitor and quickly detect abnormalities in heterogeneous data streams under limited resources, where only a subset of observations is available at each acquisition time. In particular, we integrate Q-learning with a global threshold learned through a nonparametric cumulative sum (CUSUM) procedure. The reward scheme also makes the algorithm applicable to a wide range of settings.
We conduct comprehensive simulations and a case study to evaluate the performance and demonstrate the superiority of the proposed method.
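The one-sided CUSUM statistic underlying the monitoring scheme in the last abstract can be sketched in a few lines. This is a generic textbook CUSUM on pre-standardized scores, not the authors' full procedure; the Q-learning layer that selects which streams to observe under resource constraints is omitted, and the function name and parameters are illustrative.

```python
def cusum_monitor(stream, h, k=0.5):
    """One-sided CUSUM on a sequence of standardized scores.

    Accumulates evidence of an upward mean shift, discounting each
    observation by the allowance k; returns the first time index at
    which the statistic exceeds the threshold h, or None if no alarm
    is raised.
    """
    s = 0.0
    for t, x in enumerate(stream):
        s = max(0.0, s + x - k)   # reset to 0 while in control
        if s > h:
            return t              # alarm: shift detected at time t
    return None
```

An in-control stream keeps the statistic pinned near zero, while a sustained shift of size greater than k makes it grow linearly until the threshold h is crossed.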