INFORMS Annual Meeting 2017

WC42

INFORMS Houston – 2017

3 - An Incremental Quasi-Newton Method with Local Superlinear Convergence Rate
Aryan Mokhtari, University of Pennsylvania, Penn Engineering, 220 South 33rd Street, Philadelphia, PA, 19104, United States, aryanm@seas.upenn.edu, Mark Eisen, Alejandro Ribeiro
We present an incremental Broyden-Fletcher-Goldfarb-Shanno (BFGS) method as a quasi-Newton algorithm with a cyclic update scheme for solving large-scale optimization problems. The proposed incremental quasi-Newton (IQN) algorithm reduces computational cost relative to traditional quasi-Newton methods by restricting each update to a single component function, and relative to incremental second-order methods by removing the need to compute the inverse of the Hessian. A local superlinear convergence rate is established, and numerical experiments on a set of common large-scale optimization problems show a strong improvement over first-order methods.

4 - A Heuristic Algorithm for Solving Toll Optimization Problems
Stan Uryasev, University of Florida-Gainesville, Gainesville, FL, 32611, United States, uryasev@ufl.edu, Viacheslav Kalashnikov, Vladik Kreinovich, Jose G. Flores-Muniz, Nataliya I. Kalashnykova
The problem of assigning optimal tolls to the arcs of a multi-commodity transportation network is formulated as a bilevel mathematical program. We describe an algorithm based on the allowable ranges to stay optimal (ARSO) resulting from sensitivity analysis applied to the lower-level problem. In this way, one can analyze possible changes in the coefficients of some variables in the objective function within these allowed ranges without affecting the optimal solution. In addition, when stuck at a local maximum, the “filled function” technique helps one “jump” to another local maximum (if one exists) or stop the search. Numerical experiments confirm the robustness of our heuristics.
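The cyclic single-component update at the heart of the IQN method in abstract 3 can be sketched as follows. This is a toy illustration of the general idea, not the authors' code; all function and variable names are ours, and the aggregate step shown here is one simplified variant.

```python
import numpy as np

def bfgs_update(B, s, y):
    """Standard BFGS update of a Hessian approximation B; skipped when the
    curvature condition s'y > 0 fails."""
    sy = s @ y
    if sy <= 1e-12:
        return B
    Bs = B @ s
    return B - np.outer(Bs, Bs) / (s @ Bs) + np.outer(y, y) / sy

def iqn(grads, dim, iters=100):
    """Cyclic incremental quasi-Newton: one component gradient evaluation
    and one BFGS update per iteration. grads[i](x) returns the gradient of
    component function f_i at x."""
    n = len(grads)
    x = np.zeros(dim)
    z = [x.copy() for _ in range(n)]          # per-component iterate copies
    g = [grads[i](z[i]) for i in range(n)]    # stored component gradients
    B = [np.eye(dim) for _ in range(n)]       # per-component Hessian approximations
    for t in range(iters):
        i = t % n                             # cyclic component selection
        B[i] = bfgs_update(B[i], x - z[i], grads[i](x) - g[i])
        z[i], g[i] = x.copy(), grads[i](x)
        # aggregate Newton-like step built from all stored information
        rhs = sum(Bk @ zk for Bk, zk in zip(B, z)) - sum(g)
        x = np.linalg.solve(sum(B), rhs)
    return x

# minimize f(x) = sum_i 0.5*||x - c_i||^2; the minimizer is the mean of the c_i
c = [np.array([1.0, 0.0]), np.array([3.0, 2.0])]
x_star = iqn([lambda x, ci=ci: x - ci for ci in c], dim=2)
```

On this quadratic example the aggregate step recovers the exact minimizer after a single pass, which is the sense in which the method behaves like Newton's method while touching only one component per iteration.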
5 - On the Stability of Distributed Gradient-based Consensus Methods under Communication Delays
Thinh Doan, University of Illinois, 1308 W Main Street, Urbana, IL, 61801, United States, ttdoan2@illinois.edu, Carolyn L. Beck, Rayadurgam Srikant
Motivated by applications in machine learning, we study distributed algorithms for optimization problems defined over a network of processors. The objective function is a sum of local functions, each known by only one processor. We study the convergence rate of a popular distributed gradient algorithm in the presence of network delays, which has been identified as a significant open problem. Our main contribution is to show that convergence occurs at rate O(exp(a)ln(t)/sqrt(t)), where a is the uniform delay between processors. We also provide a lower bound showing that the exponential dependence on the delay is inevitable.

360A
Organization Theory
Contributed Session
Chair: Adams Wei Yu, Carnegie Mellon University, Pittsburgh, PA, United States, adamsyuwei@gmail.com

1 - A Three-Stage Stochastic Programming Model for Managing EV Charging Stations During Major Power Disruption
Mohannad Kabli, Mississippi State University, 21 Crenshaw Lane, Apt. 10, Starkville, MS, 39759, United States, mrk297@msstate.edu, Mohammad Marufuzzaman
This study considers the planning and management of charging stations when major disruptions occur in the power system. A three-stage stochastic programming model is presented, in which the first stage decides which charging stations to install and how to expand power supply units and renewable energy sources. When the disruption occurs in the second stage, repairs to the power system and charging stations take place ahead of the arrival of the “panicked” population. Finally, as demand is revealed, managerial and operational decisions at the charging stations are made in the third stage. We use the infrastructure of Washington, DC as a testing ground to validate our proposed optimization model.

2 - Memory-efficient Kernel Methods in Online Settings
Alec Koppel, U.S. Army Research Laboratory, 2800 Powder Mill Rd., Adelphi, MD, 20783, United States, aekoppel314@gmail.com, Garrett Warnell, Ethan Stump, Alejandro Ribeiro
Popular perception is that kernel methods do not scale to streaming data due to an intractable growth in the amount of storage they require. To solve this problem in a memory-affordable way, we consider functional stochastic gradient descent in tandem with supervised sparsification based on greedy function subspace projections. The method, parsimonious online learning with kernels (POLK), provides a controllable tradeoff between solution accuracy and memory requirements. We establish almost-sure convergence of POLK and bound its memory requirements. We evaluate POLK on several learning tasks and demonstrate a state-of-the-art tradeoff between test accuracy and sample complexity.
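The idea behind POLK in abstract 2 — functional stochastic gradient descent plus sparsification of the kernel dictionary — can be illustrated with a toy sketch. This is our illustration, not the authors' code: POLK prunes via greedy function-subspace projection, whereas for brevity this version substitutes a cruder rule that drops the smallest-weight kernel atom once a memory budget is exceeded.

```python
import numpy as np

def rbf(x, y, gamma=1.0):
    """Gaussian kernel."""
    return np.exp(-gamma * np.sum((x - y) ** 2))

class BudgetedKernelSGD:
    """Online kernel regression via functional SGD under a hard dictionary
    budget. POLK prunes by greedy function-subspace projection; this toy
    stand-in instead drops the atom with the smallest absolute weight."""

    def __init__(self, step=0.5, budget=15, gamma=1.0):
        self.step, self.budget, self.gamma = step, budget, gamma
        self.points, self.weights = [], []    # the kernel dictionary

    def predict(self, x):
        return sum(w * rbf(x, p, self.gamma)
                   for w, p in zip(self.weights, self.points))

    def update(self, x, y):
        err = self.predict(x) - y             # gradient factor for squared loss
        self.points.append(np.asarray(x, dtype=float))
        self.weights.append(-self.step * err) # SGD step adds one new atom
        if len(self.points) > self.budget:    # enforce the memory budget
            j = int(np.argmin(np.abs(self.weights)))
            self.points.pop(j)
            self.weights.pop(j)
```

The budget caps memory regardless of stream length; the tradeoff POLK analyzes is how much accuracy such pruning sacrifices.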

3 - On Using Markov Chance-decision Processes in Stochastic Programming
Sudarshan Rajan, Texas A&M University, College Station, TX, United States, srajan@tamu.edu, Natarajan Gautam
We explore the use of Markov chance-decision process algorithms to solve certain Markov decision processes, analyzing the merits and demerits from a computation and accuracy standpoint. Further, we use the approach in the second stage of two-stage stochastic programs and evaluate its performance.

4 - Reconfigurable Optimization
Prateek Raj Srivastava, PhD Student, University of Texas at Austin, 204 East Dean Keeton Street, Engineering Training Center II (ETC), Austin, TX, 78712, United States, prateekrs@utexas.edu, Nedialko Dimitrov, David L. Alderson
We develop a modeling framework for a class of optimization problems in which operational decisions need to be altered as a system transitions stochastically between states of the world. Our framework addresses the multi-objective problem of minimizing the reconfiguration costs associated with changing the operational decisions while ensuring good-quality solutions in each individual state. We compare and contrast this framework with existing modeling frameworks such as Markov decision processes, stochastic programming, and robust optimization.

5 - Optimal Selection from a Sequence of Relatively Best Objects
Mitsushi Tamaki, Professor, Aichi University, Hiraike, Nakamura, Nagoya, 453-8777, Japan, tamaki@vega.aichi-u.ac.jp
A known number n of rankable objects appear sequentially in random order. An object is called a candidate if it is relatively best, and we are allowed to choose one candidate. The payoff is a(k) if we choose the kth last candidate, k = 1, 2, ..., n. We wish to find a selection rule that maximizes the expected payoff. A generalization to multiple choices is also made.
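The special case a(1) = 1 and a(k) = 0 for k > 1 of the payoff in abstract 5 is the classical secretary problem, whose optimal rule observes roughly n/e objects and then accepts the first candidate that appears. A quick Monte Carlo check of that rule (our illustration, not the speaker's material):

```python
import random

def run_secretary(n, r, trials=20000, seed=0):
    """Monte Carlo success probability of the classical threshold rule:
    observe the first r arrivals, then accept the first later arrival that
    is relatively best (i.e. beats everything seen so far)."""
    rng = random.Random(seed)
    wins = 0
    for _ in range(trials):
        ranks = list(range(n))          # n-1 denotes the overall best object
        rng.shuffle(ranks)
        best_seen = max(ranks[:r])
        chosen = None
        for i in range(r, n):
            if ranks[i] > best_seen:    # a candidate: relatively best so far
                chosen = ranks[i]
                break
        wins += (chosen == n - 1)       # payoff 1 only for the overall best
    return wins / trials

p = run_secretary(n=100, r=37)          # threshold r ~ n/e
```

For n = 100 and r = 37 the estimate lands near the limiting success probability 1/e ≈ 0.368; the general payoff a(k) in the abstract replaces the win indicator with a reward depending on which candidate (counted from the last) is chosen.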
6 - Stochastic Optimization for Deep Neural Networks
Adams Wei Yu, PhD Student, Carnegie Mellon University, Pittsburgh, PA, 15213, United States, adamsyuwei@gmail.com
In this paper, we introduce an efficient stochastic optimization algorithm to train deep neural networks.

360C
Remanufacturing
Contributed Session
Chair: Jinfu Zhang, Tsinghua University, Beijing, China, zhangjf16@mails.tsinghua.edu.cn

1 - Joint Optimization of Production Scheduling and Condition-based Maintenance Planning
Bram de Jonge, Dr., University of Groningen, P.O. Box 800, 9700 AV, Groningen, Netherlands, b.de.jonge@rug.nl
We consider the joint optimization of production scheduling and condition-based maintenance planning for a single machine that is subject to deterioration and that needs to process incoming jobs of various lengths. We formulate the problem as a Markov decision process (MDP) and determine optimal policies that indicate when to process which job and when to perform maintenance. We provide insights into how production scheduling and maintenance planning interact.

2 - Condition-based Maintenance Policies under the Gamma Degradation Process
David Han, University of Texas at San Antonio, Management Science & Statistics, College of Business, San Antonio, TX, 78249-0632, United States, david.han@utsa.edu
Condition-based maintenance is an effective method to reduce unexpected failures as well as O&M costs. This talk discusses a condition-based maintenance policy with optimal inspection points under the gamma degradation process. A random-effect parameter is used to account for population heterogeneities, and its distribution is continuously updated at each inspection epoch. The observed degradation level, along with the system age, is used to make the optimal maintenance decision, and the structure of the optimal policy is examined.
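The MDP formulation in abstract 1 above can be illustrated on a toy instance (our parameters and names, purely hypothetical, with job scheduling stripped out): a machine with a handful of degradation levels, where value iteration recovers a threshold policy that runs in good states and maintains near failure.

```python
import numpy as np

def cbm_value_iteration(K=3, p=0.5, run_reward=1.0, repair_cost=1.0,
                        fail_cost=50.0, discount=0.95, tol=1e-8):
    """Toy condition-based maintenance MDP. States 0..K-1 are degradation
    levels and K is the failed state. 'run' earns run_reward and degrades
    one level with probability p; 'maintain' pays repair_cost and resets
    the machine to state 0; failure forces a costly repair."""
    V = np.zeros(K + 1)
    while True:
        Vn = np.empty_like(V)
        for s in range(K):
            run = run_reward + discount * ((1 - p) * V[s] + p * V[s + 1])
            maintain = -repair_cost + discount * V[0]
            Vn[s] = max(run, maintain)
        Vn[K] = -fail_cost + discount * V[0]   # forced repair after failure
        done = np.max(np.abs(Vn - V)) < tol
        V = Vn
        if done:
            break
    policy = []
    for s in range(K):
        run = run_reward + discount * ((1 - p) * V[s] + p * V[s + 1])
        maintain = -repair_cost + discount * V[0]
        policy.append("run" if run >= maintain else "maintain")
    return V, policy

V, policy = cbm_value_iteration()
```

With these toy numbers the optimal policy is a threshold: run in states 0 and 1, maintain preventively in state 2 rather than risk the much costlier failure repair.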

