INFORMS Houston – 2017

TB75
372D – Nonlinear Optimization Algorithms
Sponsored: Optimization, Nonlinear Programming
Sponsored Session
Chair: Frank Edward Curtis, Bethlehem, PA, 18015, United States, frank.e.curtis@gmail.com
Co-Chair: Andreas Waechter, Northwestern University, Evanston, IL, 60208, United States, waechter@iems.northwestern.edu

1 - Failure of the Gradient Method on Nonsmooth Convex Functions
Azam Asl, NYU, 251 Mercer St., New York, NY, 10012, United States, aa2821@nyu.edu, Michael L. Overton
Motivated by the success of the BFGS method on nonsmooth functions, along with the difficulty of analyzing this phenomenon, we consider the behavior of the simpler gradient method on nonsmooth problems. We give an analysis of the gradient method with an Armijo-Wolfe inexact line search on the function $f(u,v)=a|u|+v$, where $a > 0$. We show that if a certain constant $C$, depending only on $a$ and the Armijo parameter, is positive, then the method converges to a point $(0, \bar v)$, although $f$ is unbounded below. In contrast, if $C \le -0.5$, then the function values at the iterates are unbounded below. Experimental results demonstrate that our analysis is reasonably tight.

2 - Logical Benders Decomposition for Quadratic Programs with Complementarity Constraints
Francisco Jara-Moroni, Northwestern University, 2145 Sheridan Road, Evanston, IL, 60208, United States, franciscojaramoroni2013@u.northwestern.edu, Andreas Waechter, John E. Mitchell, Jong-Shi Pang
We study a logical Benders decomposition approach to solving binary-constrained quadratic programs with linear complementarity constraints. It is based on a satisfiability master problem to which feasibility cuts are added; these cuts are computed from primal and dual subproblems for chosen complementarity pieces and binary variables. Interpreting the logical Benders decomposition approach from a branch-and-bound point of view, we propose several new methods for strengthening the feasibility cuts and guiding the master problem solution. Their efficiency is assessed by numerical experiments.

3 - An Algorithm for Optimization Problems with Joint Chance Constraints Using a Nonlinear Value-at-Risk Formulation
Alejandra Pena-Ordieres, Northwestern University, Evanston, IL, United States, alejandra.pena@u.northwestern.edu, Andreas Waechter
In this work, we present a nonparametric reformulation of an optimization problem with chance constraints. The probabilistic constraints are recast as a Value-at-Risk (VaR) constraint, and a kernel-based smoothing technique is applied to the historical empirical quantile. This yields an approximation of the quantile function that reduces the nonconvexity of the feasible region arising from the empirical quantile approximation. Some computational results are included.

4 - A Trust Funnel Algorithm for Equality Constrained Nonconvex Optimization
Mohammadreza Samadi, PhD Candidate, Lehigh University, Bethlehem, PA, 18015, United States, mos213@lehigh.edu, Frank E. Curtis, Daniel Robinson
A method is proposed for solving equality constrained nonlinear optimization problems. The method employs a trust funnel approach consisting of two phases: a first phase to locate an $\epsilon$-feasible point and a second phase to seek optimality while maintaining $\epsilon$-feasibility. A two-phase approach of this kind based on a cubic regularization methodology was recently proposed; that approach completely ignores the objective function in the first phase. Our method achieves the same worst-case iteration complexity, but with a first phase that also accounts for improvements in the objective function. As such, the method requires fewer iterations, as the numerical experiments demonstrate.
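To make the setting of talk 1 concrete, here is a minimal sketch (not the authors' code) of the gradient method with a bracketing/bisection Armijo-Wolfe line search applied to $f(u,v)=a|u|+v$. The parameter values, the `np.sign` subgradient convention at the kink $u=0$, and all function names are illustrative assumptions.

```python
import numpy as np

def f(x, a):
    # f(u, v) = a|u| + v: convex, nonsmooth along u = 0, unbounded below in v
    u, v = x
    return a * abs(u) + v

def grad(x, a):
    # Gradient where f is differentiable; np.sign(0) = 0 picks one subgradient at the kink
    u, v = x
    return np.array([a * np.sign(u), 1.0])

def armijo_wolfe(x, d, a, c1=0.1, c2=0.9, t=1.0, max_iter=60):
    """Bracketing/bisection line search: return a step t satisfying the
    Armijo (sufficient decrease) and Wolfe (curvature) conditions."""
    f0 = f(x, a)
    g0 = grad(x, a) @ d          # directional derivative, negative for a descent direction
    lo, hi = 0.0, np.inf
    for _ in range(max_iter):
        if f(x + t * d, a) > f0 + c1 * t * g0:     # Armijo fails: step too long
            hi = t
        elif grad(x + t * d, a) @ d < c2 * g0:     # Wolfe fails: step too short
            lo = t
        else:
            return t
        t = 2.0 * lo if hi == np.inf else 0.5 * (lo + hi)
    return t

def gradient_method(x0, a, iters=200):
    # Plain steepest descent with the Armijo-Wolfe line search above
    x = np.array(x0, dtype=float)
    for _ in range(iters):
        d = -grad(x, a)
        x = x + armijo_wolfe(x, d, a) * d
    return x
```

One can experiment with different values of $a$ and the Armijo parameter `c1` to observe whether the iterates stall near the line $u=0$ rather than driving $f$ to $-\infty$, as the talk's analysis predicts for suitable values of the constant $C$.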

TB76
372E – 10:30-11:15 Responsive Learning Technologies / 11:15-12:00 FICO
Invited: Vendor Tutorial
Invited Session

1 - Online Games to Teach Operations and Supply Chain Management
Samuel C. Wood, Responsive Learning Technologies, 4546 El Camino Real, #243, Los Altos, CA, 94022, United States, wood@responsive.net
Littlefield Technologies, the Supply Chain Game, and the Sourcing Game are online competitive assignments used to teach topics including process analysis, inventory control, supply chain management, and sourcing and purchasing. We will describe the games’ learning objectives, typical assignments, and actual game results.

2 - How to Deploy Your Analytic Models to Empower Non-Technical Business Users
James Williams, FICO, Roseville, MN, United States, JWilliams@fico.com
You have a team with a great analytics background. They have developed advanced analytical tools using Python, R, or your current traditional optimization solver. They have derived crucial insights from your data and figured out how your decisions shape your customers’ behaviors. Now it is time to put these critical analytical insights in the hands of your non-technical business users. In this tutorial, we will cover how FICO’s Optimization Suite (including Xpress-Mosel, Xpress-Workbench, and Xpress-Insight) makes it possible to embed your analytic models in user-friendly, business-user-facing applications. Learn how you can supercharge your analytic models with simulation, optimization, reporting, what-if analysis, and agile extensibility.
TB77
372F – New Results for Coordinate Descent and Related Methods for Convex Optimization
Sponsored: Optimization, Linear and Conic Optimization
Sponsored Session
Chair: Robert Michael Freund, MIT, Cambridge, MA, 02139, United States, rfreund@mit.edu
Co-Chair: Haihao Lu, MIT, Cambridge, MA, 02142, United States, haihao@mit.edu

1 - Pathwise Coordinate Optimization for Nonconvex Sparse Learning
Tuo Zhao, Georgia Tech, Atlanta, GA, United States, tourzhao@gatech.edu
Nonconvex optimization arises in many machine learning problems. Although classical optimization theory has shown that nonconvex optimization is computationally intractable in general, practitioners have proposed numerous heuristic algorithms that achieve outstanding performance in real-world applications. To bridge this gap between practice and theory, we propose a new generation of model-based optimization algorithms and theory, which incorporate statistical thinking into modern optimization. We exploit hidden geometric structures behind nonconvex optimization problems and obtain global optima with the desired statistical properties in polynomial time with high probability.

2 - Relatively-Continuous Convex Optimization via the Mirror Descent Algorithm
Haihao Lu, MIT, 60 Wadsworth St, Apt. 14E, Cambridge, MA, 02142, United States, haihao@mit.edu
The usual approach to developing and analyzing first-order methods for non-smooth convex optimization assumes that the objective function is uniformly Lipschitz continuous. However, in many settings the non-differentiable convex function is not uniformly Lipschitz continuous. Herein we develop a notion of ``relative continuity’’ that is determined relative to a user-specified ``reference function’’ (which should be computationally tractable for algorithms), and we show that many non-differentiable convex functions are relatively continuous with respect to a correspondingly fairly simple reference function. We use the mirror descent algorithm to solve problems in this setting.
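The pathwise coordinate optimization strategy of talk 1 above is most easily illustrated in the convex lasso special case (a simplifying assumption here; the talk addresses nonconvex regularizers): cyclic coordinate descent with soft-thresholding, solved along a decreasing regularization path with warm starts. The function names (`lasso_cd`, `lasso_path`) and the assumption that the columns of `X` are nonzero are illustrative.

```python
import numpy as np

def soft_threshold(z, gamma):
    # Proximal map of gamma * |.|: shrink z toward zero by gamma
    return np.sign(z) * np.maximum(np.abs(z) - gamma, 0.0)

def lasso_cd(X, y, lam, beta=None, n_sweeps=100, tol=1e-8):
    """Cyclic coordinate descent for (1/2n)||y - X beta||^2 + lam * ||beta||_1.
    Assumes every column of X is nonzero."""
    n, p = X.shape
    if beta is None:
        beta = np.zeros(p)
    col_sq = (X ** 2).sum(axis=0) / n      # ||X_j||^2 / n for each coordinate
    r = y - X @ beta                       # running residual
    for _ in range(n_sweeps):
        max_change = 0.0
        for j in range(p):
            old = beta[j]
            # Correlation of column j with the partial residual (j-th coordinate removed)
            rho = X[:, j] @ r / n + col_sq[j] * old
            beta[j] = soft_threshold(rho, lam) / col_sq[j]
            if beta[j] != old:
                r -= X[:, j] * (beta[j] - old)
                max_change = max(max_change, abs(beta[j] - old))
        if max_change < tol:
            break
    return beta

def lasso_path(X, y, lams):
    # Pathwise strategy: sweep a decreasing lambda grid, warm-starting each
    # solve from the previous solution
    beta = np.zeros(X.shape[1])
    path = []
    for lam in sorted(lams, reverse=True):
        beta = lasso_cd(X, y, lam, beta=beta.copy())
        path.append((lam, beta.copy()))
    return path
```

The warm-started path is the key design choice: each solve starts close to its solution, so the per-lambda coordinate sweeps converge quickly, and in the nonconvex setting of the talk the path additionally guides the iterates toward statistically favorable local solutions.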

