INFORMS Annual Meeting Phoenix 2018
TB05
2 - Landscape of Neural Networks for Binary Classification
Ruoyu Sun, University of Illinois, Urbana-Champaign
We study the landscape of neural networks for binary classification and provide conditions under which the training error is zero at all local minima. Our results provide a few necessary and sufficient conditions; for example, for ReLU and sigmoid activation functions bad local minima exist, while for strictly increasing and convex activation functions bad local minima disappear. Other conditions include: the neural network should be either single-layered or multi-layered with a shortcut-like connection, and the surrogate loss function should be a smooth version of the hinge loss.

3 - Foundations of Nonconvex and Nonsmooth Robust Subspace Recovery
Tyler Maunu, University of Minnesota
We present a mathematical analysis of a nonconvex energy landscape for robust subspace recovery. We develop generic conditions that ensure recovery of an underlying subspace in corrupted datasets. We further show that if the generic condition is satisfied, a geodesic gradient descent method over the Grassmannian manifold can exactly recover the underlying subspace. We also examine the recovery limits in the case of arbitrary outliers.

4 - Leave-one-out Analysis for Convex and Nonconvex Matrix Completion
Yudong Chen, Cornell University, 223 Frank H.T. Rhodes Hall, School of ORIE, Ithaca, NY, 14853, United States, Lijun Ding
We introduce a powerful technique, leave-one-out, to the analysis of matrix completion problems. This technique allows us to obtain entry-wise bounds for iterative stochastic procedures. We demonstrate its power by analyzing two quintessential algorithms for matrix completion: the nonconvex approach Singular Value Projection (SVP) and the convex relaxation approach Nuclear Norm Minimization (NNM). For SVP, we prove for the first time that the unmodified form of this algorithm, without sample splitting, converges linearly in the infinity norm.
For NNM, we study its dual solution and establish the first sample complexity bound that depends optimally on the matrix dimension and condition number.

n TB06
North Bldg 122C
Joint Session OPT/Practice Curated: Convex Optimization and Applications to Machine Learning
Sponsored: Optimization/Global Optimization
Sponsored Session
Chair: Georgina Hall, Princeton University, Princeton, NJ, 08544, United States

1 - Time-varying Semidefinite Programs
Amir Ali Ahmadi, Princeton University, Dept. of Operations Research & Financial Eng., Sherrerd Hall (room 329), Charlton Street, Princeton, NJ, 08544, United States, Bachir El Khadir
Time-varying semidefinite programs (TV-SDPs) are semidefinite programs whose feasible set and objective function vary continuously over a bounded time interval and whose solution is a bounded measurable function of time. We show that the best polynomial solution of a given degree to a TV-SDP can be found by solving an SDP of tractable size. We also prove that under mild assumptions, polynomial solutions are as good as any bounded measurable solution. Joint work with Bachir El Khadir.

2 - Nonnegative Polynomials and Applications to Learning
Georgina Hall, INSEAD Business School, Boulevard de Constance, Fontainebleau, 77300, France
In this talk, we show how techniques from sum of squares optimization can be applied to two problems at the interface of machine learning and polynomial optimization. In part (i), we study the problem of learning a monotone polynomial from data. This is motivated by regression problems where the underlying function to be learned is monotone (consider, e.g., the price of a car as a function of its fuel efficiency). In part (ii), we study the problem of optimally decomposing a multivariate polynomial as the difference of two convex polynomials. This is motivated by certain majorization-minimization algorithms used in nonconvex optimization that require such a decomposition.
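The SVP iteration analyzed in the matrix-completion talk above (talk 4) is simple to state: a gradient step on the observed entries followed by projection onto rank-r matrices. A minimal sketch, with made-up toy data and an assumed step size of 1/p for sampling rate p (the talk does not specify these details):

```python
import numpy as np

def svp_step(X, M, mask, rank, step):
    """One Singular Value Projection iteration: take a gradient step on the
    observed-entry squared loss, then project onto the set of rank-r matrices."""
    Y = X + step * mask * (M - X)                # gradient step on observed entries
    U, s, Vt = np.linalg.svd(Y, full_matrices=False)
    return (U[:, :rank] * s[:rank]) @ Vt[:rank]  # best rank-r approximation of Y

# Toy run: rank-1 ground truth, roughly 70% of entries observed.
rng = np.random.default_rng(0)
M = np.outer(rng.standard_normal(20), rng.standard_normal(20))
mask = (rng.random(M.shape) < 0.7).astype(float)
p = mask.mean()                                  # empirical sampling rate
X = np.zeros_like(M)
for _ in range(200):
    X = svp_step(X, M, mask, rank=1, step=1.0 / p)
err = np.max(np.abs(X - M))                      # infinity-norm error, as in the talk
```

Note that `err` measures exactly the entry-wise (infinity-norm) quantity that the leave-one-out analysis controls, including on the unobserved entries.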
3 - Design of First-order Optimization Algorithms via Sum-of-squares Programming
Mahyar Fazlyab, University of Pennsylvania, Philadelphia, PA, United States, Manfred Morari, Victor M. Preciado
We use tools from sum-of-squares programming to design iterative first-order optimization algorithms for smooth and strongly convex problems. Our starting point is to develop a polynomial matrix inequality as a sufficient condition for exponential convergence of the algorithm. We then formulate a polynomial optimization problem in which the objective is to optimize the exponential decay rate over the tunable parameters of the algorithm (e.g., step size, momentum coefficient, etc.). Finally, we use sum-of-squares programming as a tractable relaxation of the proposed polynomial optimization problem. We illustrate the utility of the proposed framework by designing an accelerated method.
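The theme of talk 3 — optimizing a convergence rate over an algorithm's tunable parameters — can be seen by hand in the simplest case, without the paper's SOS machinery: for gradient descent on an m-strongly convex, L-smooth quadratic, the worst-case per-step contraction is max(|1 − ηm|, |1 − ηL|), minimized at η = 2/(m+L) with rate (L − m)/(L + m). A small numerical check of this classical fact:

```python
import numpy as np

def contraction(eta, m, L):
    """Worst-case per-step contraction factor of x+ = x - eta * grad f(x)
    over quadratics whose Hessian eigenvalues lie in [m, L]."""
    return max(abs(1 - eta * m), abs(1 - eta * L))

m, L = 1.0, 10.0
etas = np.linspace(1e-4, 2.0 / L, 2000)          # grid over stable step sizes
rates = np.array([contraction(e, m, L) for e in etas])
eta_star, rate_star = etas[rates.argmin()], rates.min()
# Classical optimum: eta = 2/(m+L) = 2/11, rate = (L-m)/(L+m) = 9/11.
```

The SOS approach in the talk automates this kind of rate optimization for richer algorithm families (momentum, multi-step methods), where closed-form answers are not available.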
n TB04
North Bldg 122A
Extended Formulations in Integer Programming and Combinatorial Optimization
Sponsored: Optimization/Integer and Discrete Optimization
Sponsored Session
Chair: Gonzalo Muñoz, Polytechnique Montréal, Montréal, QC, H3T 1N8, Canada

1 - Extended Formulations for Unstructured Integer Programs
Akshay Gupte, Clemson University, Dept. of Mathematical Sciences, O-321 Martin Hall, Clemson, SC, 29634-0975, United States, Amin Khademi, Andrew J. Schaefer
We present a systematic procedure for constructing extended formulations for convex hulls of pure integer programs. Our technique relies only on the data for the LP relaxation and does not depend on any particular structure of the IP feasible set. A key step in our construction is the use of subadditive duality for integer programming. If the original LP is represented by a totally dual integral system with integral right-hand side, then our extension is exactly the LP polyhedron, as desired. Finally, we apply our result to packing and covering problems.

2 - LP & SDP Hierarchies: How to Schedule?
Víctor Verdugo, Universidad de Chile & École normale supérieure, Santiago, Chile
Convex hierarchies are systematic approaches to strengthening a relaxation. In this talk we focus on how to construct extended formulations using LP and SDP hierarchies for the problem of scheduling identical machines, starting from the configuration linear program, and how these hierarchies fail to obtain gaps arbitrarily close to one even at exponential size. We then provide some ideas on how to overcome this negative result in the context of identical machines and how to construct a new family of polynomial-sized relaxations with gap arbitrarily close to one.
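The "gap" that both talks in this session care about is the ratio between a relaxation's optimum and the true integer optimum; extended formulations and hierarchies try to drive it to one. A toy brute-force illustration on a hypothetical knapsack instance (the data below are invented and unrelated to either talk):

```python
from itertools import product

# Hypothetical knapsack: maximize v.x subject to w.x <= C, x binary.
values, weights, C = [6, 10, 12], [3, 5, 7], 10

# Integer optimum by enumerating the 0/1 feasible set (the integer hull's vertices).
ip_opt = max(
    sum(v * x for v, x in zip(values, xs))
    for xs in product((0, 1), repeat=len(values))
    if sum(w * x for w, x in zip(weights, xs)) <= C
)

# LP-relaxation optimum via Dantzig's fractional-greedy rule (optimal for
# the single-constraint knapsack LP).
cap, lp_opt = C, 0.0
for v, w in sorted(zip(values, weights), key=lambda t: t[0] / t[1], reverse=True):
    frac = min(1.0, cap / w)
    lp_opt += frac * v
    cap -= frac * w
    if cap <= 0:
        break

gap = lp_opt / ip_opt  # > 1: the relaxation strictly overestimates the IP optimum
```

Here the LP optimum is 16 + 24/7 ≈ 19.43 against an integer optimum of 18, so the relaxation is not tight; hierarchies and extended formulations add variables and constraints to shrink exactly this kind of gap.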
3 - Formulations for Spanning Tree (and Related Problems) in Planar Graphs
Hamidreza Validi, Oklahoma State University, Stillwater, OK, 74075, United States, Austin Buchanan
In a 2002 Networks paper, Williams proposed an IP formulation for spanning trees of a planar graph. The formulation is remarkably small (using only O(n) variables) and remarkably strong (defining an integral polytope). In this talk, we show that Williams’ formulation is incorrect as stated, providing a binary feasible solution that does not represent a spanning tree. We characterize when this can happen and show how to fix the formulation. We also propose generalizations of the fixed formulation for related problems.

4 - Treewidth-based Extension Complexity Lower Bounds
Gonzalo Muñoz, Polytechnique Montréal, Montréal, QC, Canada, Yuri Faenza, Sebastian Pokutta
In this work, we study the extension complexity of 0/1 sets parametrized by treewidth, a graph-theoretical parameter that measures structured sparsity. If a 0/1 set can be formulated as the set of binary vectors that satisfy a certain system of constraints, and those constraints present a sparsity pattern whose treewidth is k, then it is known that the extension complexity of the convex hull of the set is O(n·2^k). The goal of this work is to prove the existence of 0/1 sets that (nearly) meet this bound for any treewidth level k. To the best of our knowledge, this is the first work to provide parametric lower bounds on extension complexity based on treewidth.

n TB05
North Bldg 122B
Recent Developments and Methods in Nonconvex Optimization
Sponsored: Optimization/Linear and Conic Optimization
Sponsored Session
Chair: Brendan Ames, University of Alabama, Tuscaloosa, AL, United States

1 - Non-convex Factorization Methods for the K-disjoint-clique Problem
Alexander Barnes, University of Alabama
Large-scale semidefinite programming has many applications, including optimal control, computer vision, and machine learning.
However, current algorithms for solving semidefinite programs (SDPs) can be time-consuming and memory-intensive. We consider new heuristics for the solution of SDPs based on non-convex factorization, the augmented Lagrangian method, and alternating minimization. In particular, we focus on solutions of semidefinite relaxations for the k-clique and k-cluster problems. We will present numerical results illustrating the efficacy of our approach for clustering of real and simulated data.
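The factorization idea underlying this talk (Burer–Monteiro style) is to replace the memory-heavy PSD matrix variable X with a thin factor U, X = UUᵀ, and optimize over U directly. A minimal sketch on a planted two-clique instance; the loss, step size, and synthetic data below are illustrative assumptions, not the authors' exact method:

```python
import numpy as np

def factorized_fit(A, k, steps=500, lr=0.01, seed=1):
    """Heuristic low-rank surrogate for an SDP: replace the PSD variable X by
    U U^T and run plain gradient descent on f(U) = ||U U^T - A||_F^2."""
    rng = np.random.default_rng(seed)
    U = 0.1 * rng.standard_normal((A.shape[0], k))
    for _ in range(steps):
        U -= lr * 4 * (U @ U.T - A) @ U  # gradient of f(U)
    return U

# Synthetic instance: two planted disjoint 4-cliques (adjacency with self-loops).
A = np.kron(np.eye(2), np.ones((4, 4)))
U = factorized_fit(A, k=2)
residual = np.max(np.abs(U @ U.T - A))   # near zero at a good minimum
# At a good minimum the rows of U act as cluster coordinates: rows from the
# same clique align (inner product near 1), while rows from different cliques
# are nearly orthogonal (inner product near 0).
same, diff = U[0] @ U[1], U[0] @ U[4]
```

Storing U (n × k) instead of X (n × n) is what makes this approach attractive at scale; the price is a nonconvex landscape, which is exactly what the session's talks analyze.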