SD74

372C
Emerging Computational Optimization Techniques
Sponsored: Computing
Sponsored Session

Chair: Chris Lourenco, Texas A&M University, clouren@tamu.edu

1 - Roundoff-Error-Free Framework for the Exact Solutions of Sparse Linear Systems
Christopher J. Lourenco, Texas A&M University, Dept. of Industrial and Systems Engineering, 3131 TAMU, College Station, TX, 77843, United States, clouren@tamu.edu, Adolfo Raphael Escobedo, Erick Moreno-Centeno, Timothy A. Davis

LU factorizations are the key tool used to solve the sparse systems of linear equations that arise in many classes of mathematical programs. In many documented cases, however, roundoff errors accrued during these factorizations can cause solvers to output infeasible or suboptimal solutions, especially on numerically unstable instances. To address this, we developed an exact left-looking LU framework that solves sparse linear systems with all operations performed entirely in integer arithmetic. We also present computational results showing that the novel left-looking LU framework significantly outperforms modern state-of-the-art sparse exact solvers.
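A minimal sketch of the exact-arithmetic idea, not the authors' integer-preserving LU framework (which factorizes in pure integer arithmetic and exploits sparsity): Gaussian elimination over Python's rationals, which is immune to the roundoff that can corrupt floating-point factorizations on nearly singular systems.

```python
# Illustration only: exact rational Gaussian elimination via Python's Fraction.
from fractions import Fraction

def exact_solve(A, b):
    """Solve Ax = b exactly by Gaussian elimination over the rationals."""
    n = len(A)
    M = [[Fraction(A[i][j]) for j in range(n)] + [Fraction(b[i])] for i in range(n)]
    for k in range(n):
        p = next(i for i in range(k, n) if M[i][k] != 0)  # nonzero pivot row
        M[k], M[p] = M[p], M[k]
        for i in range(k + 1, n):
            f = M[i][k] / M[k][k]
            for j in range(k, n + 1):
                M[i][j] -= f * M[k][j]
    x = [Fraction(0)] * n
    for i in reversed(range(n)):
        x[i] = (M[i][n] - sum(M[i][j] * x[j] for j in range(i + 1, n))) / M[i][i]
    return x  # exact rational solution, no roundoff error

# A nearly singular system where float64 loses digits but Fraction does not.
A = [[1, 1], [1, Fraction(1) + Fraction(1, 10**12)]]
b = [2, 2]
print(exact_solve(A, b))  # -> [Fraction(2, 1), Fraction(0, 1)]
```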

2 - Optimization for L1-norm Error Fitting via Data Aggregation
Young Woong Park, Iowa State University, Ames, IA, United States, ywpark@iastate.edu

We propose a data aggregation-based algorithm with monotonic convergence to a global optimum for a generalized version of the L1-norm error fitting model. The proposed algorithm can solve multi-dimensional fitting problems with arbitrary constraints on the fitting coefficients; any model following this form can be solved to optimality with the proposed algorithm. The generalized problem includes popular models such as regression, principal component analysis, and the orthogonal Procrustes problem. Computational experiments show that the proposed algorithm is faster than state-of-the-art benchmarks on the problems and data sets studied.
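The aggregation scheme is the paper's contribution; the sketch below only sets up the base model it accelerates, least-absolute-deviations regression min ||X beta - y||_1, as a linear program solved directly. Data and dimensions are assumptions.

```python
# L1-norm fitting as an LP with residual-splitting variables r+, r- >= 0:
#   min sum(r+ + r-)  s.t.  X beta + r+ - r- = y.
import numpy as np
from scipy.optimize import linprog

def l1_regression(X, y):
    m, n = X.shape
    c = np.concatenate([np.zeros(n), np.ones(2 * m)])      # cost only on residuals
    A_eq = np.hstack([X, np.eye(m), -np.eye(m)])
    bounds = [(None, None)] * n + [(0, None)] * (2 * m)    # beta free, residuals >= 0
    res = linprog(c, A_eq=A_eq, b_eq=y, bounds=bounds, method="highs")
    return res.x[:n]

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 3))
beta_true = np.array([1.0, -2.0, 0.5])
y = X @ beta_true + rng.laplace(scale=0.1, size=50)
print(l1_regression(X, y))  # close to beta_true
```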

3 - Mining High-Quality Solutions to Learn Effective and Interpretable Heuristics
Michele Samorani, Santa Clara University, 500 El Camino Real, Santa Clara, CA, 95053, United States, msamorani@scu.edu, Manuel Laguna

We propose a method to automatically design effective and interpretable heuristic algorithms. Our method employs data mining to analyze the optimal solutions of a training set of problem instances. It then finds an interpretable rule that can be applied at each step of a constructive algorithm to tackle new instances. We test our method on a variety of binary optimization problems, such as the knapsack and max-cut problems.
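A toy rendition of the mine-then-construct idea on 0/1 knapsack (assumed features and learner, not the authors' exact pipeline): label items by membership in optimal solutions, then fit a shallow decision tree as an interpretable selection rule.

```python
# Mine optimal knapsack solutions, then learn an interpretable item-selection rule.
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(1)

def solve_knapsack(v, w, cap):
    """Exact 0/1 knapsack by dynamic programming; returns the optimal 0/1 vector."""
    n, W = len(v), cap
    dp = [[0] * (W + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        for c in range(W + 1):
            dp[i][c] = dp[i - 1][c]
            if w[i - 1] <= c and dp[i - 1][c - w[i - 1]] + v[i - 1] > dp[i][c]:
                dp[i][c] = dp[i - 1][c - w[i - 1]] + v[i - 1]
    x, c = [0] * n, W
    for i in range(n, 0, -1):
        if dp[i][c] != dp[i - 1][c]:
            x[i - 1] = 1
            c -= w[i - 1]
    return x

# Training data: per-item features, labeled by membership in the optimal solution.
feats, labels = [], []
for _ in range(200):
    v = rng.integers(1, 100, 10)
    w = rng.integers(1, 100, 10)
    x = solve_knapsack(list(v), list(w), 150)
    for j in range(10):
        feats.append([v[j], w[j], v[j] / w[j]])
        labels.append(x[j])

tree = DecisionTreeClassifier(max_depth=2).fit(feats, labels)
print(export_text(tree, feature_names=["value", "weight", "ratio"]))
```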

4 - Improved Basic Procedure for Chubanov's Algorithm
Carlos Deck, University of California-Berkeley, 2100 Channing Way, Apartment 316, Berkeley, CA, 94704, United States, cgdeck@berkeley.edu

This work improves Chubanov's projection algorithm for the linear feasibility problem of finding a non-negative vector in Ker(A). This is done by working in the original space of the x-variables instead of solving the alternative system. By changing the solution space, we can focus on solutions within the unit simplex instead of the unit hypercube, which strengthens the cuts provided by Chubanov and Roos. We show how to perform the rescaling in this setting, given that the feasible region is no longer homogeneous. Our computational experiments show a significant improvement in practice over Chubanov's and Roos' variants of this algorithm.
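For flavor, a minimal von Neumann-style basic procedure over the unit simplex (the setting the abstract works in): project onto Ker(A) and stop when the projection is strictly positive. This omits the rescaling and the improved cuts that are the talk's actual contribution; the tolerance and iteration cap are arbitrary.

```python
# Von Neumann-style basic procedure: seek z in Ker(A) with z > 0.
import numpy as np

def basic_procedure(A, max_iter=10000):
    n = A.shape[1]
    P = np.eye(n) - A.T @ np.linalg.pinv(A @ A.T) @ A   # projector onto Ker(A)
    y = np.full(n, 1.0 / n)                             # barycenter of the simplex
    for _ in range(max_iter):
        z = P @ y
        if np.all(z > 1e-12):
            return z                                    # z in Ker(A), z > 0: feasible
        k = int(np.argmin(z))                           # a coordinate with z_k <= 0
        d = P[:, k] - z
        alpha = -(z @ d) / (d @ d)                      # step minimizing ||z + alpha*d||
        y = (1 - alpha) * y + alpha * np.eye(n)[k]      # stay in the simplex
    return None                                         # no conclusion within the cap

A = np.array([[1.0, -1.0, 0.0],
              [0.0, 1.0, -1.0]])                        # Ker(A) = span{(1, 1, 1)}
print(basic_procedure(A))                               # a positive point of Ker(A)
```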

SD75

372D
Distributed Optimization
Sponsored: Optimization, Nonlinear Programming
Sponsored Session

Chair: Necdet Serhat Aybat, Pennsylvania State University, University Park, PA, 16802, United States, nsa10@psu.edu

1 - Proximal-Proximal-Gradient Method
Ernest Kang Ryu, UCLA, 10910 Wellworth Ave., Apt. 310, Los Angeles, CA, 90024, United States, ernestryu87@gmail.com

We present the proximal-proximal-gradient method (PPG), a novel optimization method that is simple to implement and simple to parallelize. PPG is applicable to minimization problems written as a sum of many differentiable and possibly non-differentiable convex functions. We furthermore present a related variation, which we call stochastic PPG (S-PPG), that can be interpreted as a generalization of Finito and MISO. We present many applications that can benefit from PPG and S-PPG and prove convergence for both methods. A key strength of S-PPG is its ability to directly handle a large sum of non-smooth, non-separable functions with a constant stepsize independent of the number of functions.
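A sketch of the PPG iteration on a toy lasso-type instance, following our reading of the authors' description (an averaged-copies prox step on r, then parallel prox-gradient steps on the g_i and f_i); the exact stepsize condition and constants should be checked against the paper. The data, stepsize, and iteration count are assumptions.

```python
# PPG sketch for  minimize  r(x) + (1/n) * sum_i [ f_i(x) + g_i(x) ].
import numpy as np

def ppg(prox_r, grad_f, prox_g, n, dim, alpha, iters=2000):
    z = np.zeros((n, dim))                       # one copy z_i per summand
    x_half = z.mean(axis=0)
    for _ in range(iters):
        x_half = prox_r(z.mean(axis=0), alpha)   # half-step: prox of r at z-bar
        for i in range(n):                       # embarrassingly parallel over i
            x_i = prox_g(i, 2 * x_half - z[i] - alpha * grad_f(i, x_half), alpha)
            z[i] += x_i - x_half                 # local copy update
    return x_half

# Toy instance: f_i(x) = 0.5*(a_i @ x - b_i)^2, g_i = 0, r = lam * ||x||_1.
rng = np.random.default_rng(0)
A, b, lam = rng.normal(size=(20, 5)), rng.normal(size=20), 0.1
soft = lambda v, t: np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

x = ppg(prox_r=lambda v, a: soft(v, a * lam),
        grad_f=lambda i, x: (A[i] @ x - b[i]) * A[i],
        prox_g=lambda i, v, a: v,                # g_i = 0, so its prox is the identity
        n=20, dim=5, alpha=0.02)
print(x)                                         # approximate lasso solution
```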
2 - When Cyclic Coordinate Descent Outperforms Randomized Coordinate Descent
Nuri Denizcan Vanli, Graduate Student, Massachusetts Institute of Technology, Cambridge, MA, 02141, United States, denizcan@mit.edu, Mert Gurbuzbalaban, Asuman Ozdaglar

The coordinate descent (CD) method is a classical optimization algorithm that has seen a revival of interest. State-of-the-art convergence rate estimates suggest that randomized coordinate descent (RCD) performs better than cyclic coordinate descent (CCD), although numerical experiments do not provide clear justification for this comparison. We provide problem classes for which CCD is faster than RCD in terms of asymptotic worst-case convergence, and we present lower and upper bounds on the improvement in the rate of CCD relative to RCD. We also characterize the best deterministic order in terms of the combinatorial properties of the Hessian matrix of the objective function.
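A quick, self-contained experiment in the spirit of the comparison (illustration only; the talk's claims concern asymptotic worst-case rates for specific problem classes, not this random instance):

```python
# Cyclic vs. randomized coordinate descent on f(x) = 0.5 x'Qx - b'x, Q > 0.
import numpy as np

def coord_descent(Q, b, order, passes=100):
    x = np.zeros(len(b))
    for _ in range(passes):
        for i in order(len(b)):
            # Exact minimization over coordinate i: set df/dx_i = 0.
            x[i] = (b[i] - Q[i] @ x + Q[i, i] * x[i]) / Q[i, i]
    return 0.5 * x @ Q @ x - b @ x

rng = np.random.default_rng(0)
M = rng.normal(size=(30, 30))
Q = M @ M.T + np.eye(30)                  # positive-definite Hessian
b = rng.normal(size=30)
xs = np.linalg.solve(Q, b)
f_star = 0.5 * xs @ Q @ xs - b @ xs       # optimal value for reference

print("CCD gap:", coord_descent(Q, b, lambda n: range(n)) - f_star)
print("RCD gap:", coord_descent(Q, b, lambda n: rng.integers(0, n, n)) - f_star)
```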

3 - Multi-agent Constrained Optimization of a Strongly Convex Function over Time-varying Directed Networks
Erfan Yazdandoost Hamedani, Pennsylvania State University, 310 Leonard Building, University Park, PA, 16802, United States, evy5047@psu.edu, Necdet Serhat Aybat

We consider cooperative multi-agent consensus optimization problems over time-varying directed communication networks. The objective is to minimize the sum of agent-specific composite convex functions subject to agent-specific conic constraint sets. Assuming the sum function is strongly convex, we propose a distributed primal-dual algorithm, DPDA-TV. While each agent's function may be merely convex, we prove O(1/k^2) convergence rates for the suboptimality, infeasibility, and consensus violation of the ergodic iterate sequence in the iteration counter k, where the k-th iteration of DPDA-TV requires O(log(k)) local communication rounds with neighboring nodes.
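DPDA-TV itself is specified in the authors' paper and is not reproduced here. As a stand-in, the sketch below is a minimal decentralized-gradient (DGD-style) iteration that only illustrates the consensus-optimization setting the abstract describes: agents mix neighbors' iterates, then descend their private objectives. The graph, mixing matrix, objectives, and stepsize are all illustrative assumptions.

```python
# DGD-style consensus optimization: three agents minimize sum_i 0.5*(x - c_i)^2.
import numpy as np

c = np.array([1.0, 2.0, 6.0])        # minimizer of the sum: mean(c) = 3.0
W = np.array([[0.5, 0.5, 0.0],       # doubly stochastic mixing matrix on a ring
              [0.0, 0.5, 0.5],
              [0.5, 0.0, 0.5]])

x = np.zeros(3)                      # one scalar iterate per agent
for k in range(1, 500):
    grad = x - c                     # each agent's local gradient
    x = W @ x - (1.0 / k) * grad     # mix with neighbors, then descend
print(x)                             # all three agents approach 3.0
```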
4 - Zeroth-Order Nonconvex Multi-Agent Optimization
Davood Hajinezhad, Iowa State University, 62 B. Schilletter Village, Ames, IA, 50010, United States, dhaji@iastate.edu, Mingyi Hong, Alfredo A. Garcia

We consider distributed optimization problems over a multi-agent network, where each agent can only partially evaluate the objective function and is allowed to exchange messages with its immediate neighbors. Differently from existing works, our focus is on optimizing a class of difficult non-convex problems under the challenging setting where each agent can access only zeroth-order information about its local functions. For different types of network topologies, we develop a number of efficient distributed algorithms and rigorously analyze their convergence and rates of convergence.
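The distributed algorithms and their rates are in the talk; the sketch below only illustrates the kind of zeroth-order oracle such methods build on, a two-point random-direction gradient estimator that touches function values alone. The smoothing radius and sample count are arbitrary choices.

```python
# Two-point zeroth-order gradient estimator: uses only function evaluations.
import numpy as np

def zo_gradient(f, x, mu=1e-4, samples=20, rng=np.random.default_rng(0)):
    """Estimate grad f(x) from function values only."""
    g = np.zeros_like(x)
    for _ in range(samples):
        u = rng.normal(size=x.shape)                     # random direction
        g += (f(x + mu * u) - f(x - mu * u)) / (2 * mu) * u
    return g / samples                                   # unbiased for smooth f as mu -> 0

f = lambda x: np.sum(x ** 2)                             # true gradient: 2x
x = np.array([1.0, -2.0])
print(zo_gradient(f, x))                                 # roughly [2., -4.], up to sampling noise
```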

SD76

372E
Optimization Society Award Session
Sponsored: Optimization
Sponsored Session

Chair: David Morton, Northwestern University, IEMS Department, 2145 Sheridan Road, Evanston, IL, 60208, United States, david.morton@northwestern.edu

1 - Khachiyan Prize Recipient
Robert J. Vanderbei, Princeton University, Operations Research & Financial Engineering, 209 Sherrerd Hall, Princeton, NJ, 08544, United States, rvdb@princeton.edu
This prize will be awarded with a short presentation; an abstract is not available.

2 - Farkas Prize Recipient
Kim-Chuan Toh, National University of Singapore, Singapore, Singapore, mattohkc@nus.edu.sg
This prize will be awarded with a short presentation; an abstract is not available.

3 - Young Researchers Prize Recipients
Alberto Del Pia, University of Wisconsin–Madison, Madison, WI, United States, delpia@wisc.edu, Aida Khajavirad
This prize will be awarded with a short presentation; an abstract is not available.

4 - Student Paper Prize Recipient
Frans de Ruiter, Tilburg University, H.R. Holststraat 48, Tilburg, 4103 VB, Netherlands, f.j.c.t.deruiter@uvt.nl
This prize will be awarded with a short presentation; an abstract is not available.
