INFORMS Annual Meeting Phoenix 2018



n TA08 North Bldg 124A
Network Optimization and Graph Algorithms II
Sponsored: Optimization/Network Optimization
Sponsored Session
Chair: Gino J. Lim, University of Houston, Professor and Chairman, Dept of Industrial Engineering, Houston, TX, 77204, United States

1 - Generating Directed Networks with Predefined Degree Distribution and Directed Clustering Coefficient
Alex Marshall, Analyst, The Perduco Group, Beavercreek, OH, 45431, United States, Paul F. Auclair
Experimental designs that require the generation of directed networks with predefined clustering are often limited to the undirected clustering coefficient. This presentation reviews an adaptation and extension of the Kashyap-Ambika Degree Preserving Rewiring (DPR) techniques for tuning clustering in directed networks. The adaptation enhances the DPR techniques by facilitating downward, in addition to upward, adjustment of directed clustering. The extension integrates the DPR techniques and adaptations, enabling the systematic generation of directed networks with a predefined degree distribution and directed clustering coefficient.

2 - A Bi-objective Covering Tour Problem
Sanaz Goldani, PhD Student, North Carolina State University, Raleigh, NC, 27606, United States, Yahya Fathi
We introduce several families of valid inequalities for the constrained Covering Tour Problem and discuss appropriate strategies for incorporating these inequalities into a branch-and-cut algorithm for solving the corresponding IP model. We then employ this algorithm to solve a corresponding bi-objective covering tour problem and present the results of a limited computational study.

3 - The Team-orienteering Problem with Diversity Constraints: Application to the Travel Industry
Yanlu Zhao, ESSEC Business School, Paris, France, Laurent Alfandari
This paper examines how online travel agencies (OTAs) are restructuring modern lifestyles.
We formulate the tour planning process as a team orienteering problem (TOP) with restrictions on budget and duration. Unlike the standard TOP, we are the first to consider diversity in tour design, providing a diversified tour package rather than multiple independent tours. Such diversity enables us to explore the trade-off between personal preferences in customer choices and economies of scale in agency bargaining power. Further, we reformulate the model as a tractable master problem and solve it by column generation. Finally, we use a real dataset to test the algorithm and explore the effect of diversity.

4 - An Integrated Problem of Hub Location and Revenue Management with Network Disruptions
Yanting Hou, Tongji University, 1500 Siping Road, Shanghai, 200092, China
We maximize total profit in an integrated problem that combines the star capacitated single-allocation p-hub location problem with revenue management, taking disruptions and customer segmentation into account. The problem is to decide the locations of hubs and their capacities under the risk of disruptions, the connections, and the booking limits and protection levels of tickets across multiple fare classes.

5 - A Consistent Network Complexity Measure and its Application in Logistics Network Design
Yunhui Lin, National University of Singapore, Engineering Drive 2, Blk E1 #07-21, Singapore, Singapore, Yuan Wang, Loo Hay Lee
Quantifying network complexity has been a challenging problem in various disciplines. Most network complexity measures are not consistent, and some are even counter-intuitive. We examine the consistency of several entropy-based network complexity measures for directed graphs and check whether these measures agree with intuition. We then propose a consistent network complexity measure and show how to apply it as a decision-making tool and how to integrate it into optimization models for logistics network design.
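The premise of talk 5, that common entropy-based complexity measures can behave counter-intuitively, is easy to probe with a toy implementation. The sketch below computes a generic degree-distribution entropy for a directed graph (an illustrative stand-in, not the measure proposed in the talk); note that it assigns zero complexity to any out-regular digraph, such as a directed cycle, regardless of its size.

```python
import math
from collections import Counter

def degree_entropy(edges):
    """Shannon entropy (bits) of the out-degree distribution of a
    directed graph given as a list of (source, target) edges.

    A common entropy-based complexity measure: graphs whose nodes all
    share the same out-degree (e.g. a directed cycle) score zero.
    """
    out_deg = Counter(a for a, b in edges)
    nodes = {n for e in edges for n in e}
    degs = [out_deg.get(n, 0) for n in nodes]
    dist = Counter(degs)          # out-degree value -> node count
    n = len(degs)
    return -sum((c / n) * math.log2(c / n) for c in dist.values())

# A 4-node directed cycle: every node has out-degree 1, entropy 0.
cycle_entropy = degree_entropy([(0, 1), (1, 2), (2, 3), (3, 0)])
# An out-star from node 0: mixed out-degrees, positive entropy.
star_entropy = degree_entropy([(0, 1), (0, 2), (0, 3)])
```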
6 - A Bi-objective Optimization Model Toward a Robust Network Partitioning
Gino J. Lim, University of Houston, Professor and Chairman, Dept of Industrial Engineering, Houston, TX, 77204, United States, Saeedeh Abbasi, Masoud Barati
Dealing with large network-structured systems is difficult. Parallel processing is therefore recommended by partitioning the network, which facilitates computation by reducing the size of the network handled at any one time. The common partitioning criterion is modularity, but considering an additional metric alongside it benefits the result. This study addresses the vulnerability of the resulting partitions by maximizing edge-connectivity alongside modularity. The problem is formulated as a bi-objective maximization model. Two case studies of random graphs with 6 and 40 nodes are analyzed to demonstrate the model's performance.
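Modularity, the common partitioning criterion named in talk 6, can be evaluated for a candidate partition in a few lines. The sketch below is the standard Newman-Girvan modularity for an undirected graph, shown only as background; the talk's bi-objective model (modularity together with edge-connectivity) is not reproduced here.

```python
def modularity(edges, communities):
    """Newman-Girvan modularity Q of a partition of an undirected graph.

    Q = sum over communities c of [ m_c / m - (d_c / (2m))^2 ], where m
    is the total edge count, m_c the number of edges inside c, and d_c
    the total degree of the nodes in c.
    """
    m = len(edges)
    label = {n: i for i, comm in enumerate(communities) for n in comm}
    q = 0.0
    for i, comm in enumerate(communities):
        inside = sum(1 for a, b in edges if label[a] == i and label[b] == i)
        degree = sum((a in comm) + (b in comm) for a, b in edges)
        q += inside / m - (degree / (2 * m)) ** 2
    return q

# Two triangles joined by a single bridge edge: splitting at the bridge
# gives a strongly modular partition, while the trivial one-community
# partition always scores Q = 0.
two_triangles = [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5), (2, 3)]
q_split = modularity(two_triangles, [{0, 1, 2}, {3, 4, 5}])
q_trivial = modularity(two_triangles, [{0, 1, 2, 3, 4, 5}])
```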

n TA09 North Bldg 124B

Nonconvex and Stochastic Optimization
Sponsored: Optimization/Nonlinear Programming
Sponsored Session
Chair: Mert Gurbuzbalaban, Massachusetts Institute of Technology, Cambridge, MA, 02139, United States
Co-Chair: Nuri Denizcan Vanli, Massachusetts Institute of Technology, Cambridge, MA, 02141, United States

1 - Beyond Gradient Methods: Models in Stochastic and Non-convex Optimization
John Duchi, Department of Statistics - 390 Serra Mall, Stanford University, Stanford, CA, 94305, United States
We consider minimization of stochastic functionals that are (potentially) non-smooth and non-convex but are locally approximable (in a sense we make precise) by convex functions. This class includes compositions of a (potentially) non-smooth convex function h and a smooth function c. We develop methods that build local models of the problem and show theoretically and empirically that they perform well, typically substantially better than subgradient methods. We analyze this problem class in the context of generic nonlinear measurement problems, showing that we can solve these problems (even with faulty measurements) with high probability under appropriate measurement models.

2 - Implicit Bias of Optimization in Learning
Suriya Gunasekar, Toyota Technological Institute at Chicago, Chicago, IL, United States, Jason Lee, Nathan Srebro, Daniel Soudry
In optimization problems with multiple non-equivalent global minima, we ask how different optimization algorithms influence the specific global minimum they converge to. We focus on underdetermined regression and separable classification tasks in machine learning, where the training loss has multiple global minima that all give zero training loss but differ in performance on test data. In such problems we will see cases where the specific global minimum reached by a given optimization algorithm has a simple characterization in terms of the geometry of the algorithm, independent of hyperparameter choices such as step size and momentum.

3 - Robust Accelerated Gradient Method
Alireza Fallah, Massachusetts Institute of Technology, 77 Massachusetts Ave., Room 32-D640, Cambridge, MA, 02139, United States, Mert Gurbuzbalaban, Asuman Ozdaglar, Necdet Serhat Aybat
We study the trade-off between rate of convergence and robustness to gradient errors in designing first-order algorithms. In particular, we focus on standard optimization algorithms such as gradient descent (GD) and Nesterov's accelerated gradient (AG) method for strongly convex objectives when the gradient has random errors in the form of additive white noise. We develop a tractable algorithm that allows us to set the parameters of each method to achieve a particular trade-off between rate and robustness. In addition, our results show that AG can achieve acceleration while being more robust to random gradient errors.

4 - Spurious Local Minima in Neural Networks: A Critical View
Chulhee Yun, Massachusetts Institute of Technology, Cambridge, MA, United States, Suvrit Sra, Ali Jadbabaie
We investigate the loss surface of nonlinear neural networks. We prove that even for networks with one hidden layer and the "slightest" nonlinearity, there can be spurious local minima. Our results thus indicate that, in general, "no spurious local minima" is a property limited to deep linear networks. Specifically, for ReLU(-like) networks we prove that for almost all practical datasets there exist infinitely many local minima. We also present a counterexample for more general activation functions, for which there exists a local minimum strictly inferior to the global minimum. Our results make the least restrictive assumptions relative to the existing results on local optimality in neural networks.

5 - A Stochastic Trust Region Algorithm with Careful Step Normalization
Rui Shi, Lehigh University, Bethlehem, PA, United States, Benjamin Field Hobbs
In this talk we present a stochastic trust region algorithm with an emphasis on normalized steps. We derive the method from a stochastic trust region subproblem, but we also show that adopting the trust region subproblem directly forfeits convergence. Hence our approach involves a modified update scheme, for which we show convergence for both convex and nonconvex objectives. We also provide numerical examples showing that our method outperforms SG on convex and nonconvex machine learning problems.
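The rate-versus-robustness trade-off at the center of talk 3 can be seen in a toy experiment: gradient descent on a one-dimensional quadratic with white noise added to the gradient. The script below is only an illustration of the phenomenon, not the parameter-selection algorithm from the talk. A larger step size contracts toward the minimizer faster, but amplifies gradient noise in the stationary distribution of the iterates.

```python
import random
from statistics import pvariance

def noisy_gd(step, noise_std, iters=2000, x0=10.0, seed=0):
    """Run gradient descent on f(x) = x**2 / 2 (so grad f(x) = x) with
    additive white noise on the gradient; return the final iterate."""
    rng = random.Random(seed)
    x = x0
    for _ in range(iters):
        x -= step * (x + rng.gauss(0.0, noise_std))
    return x

# Without noise, both step sizes drive the iterate to the minimizer x* = 0
# (the large step just gets there in fewer iterations).
exact_large = noisy_gd(0.9, 0.0)
exact_small = noisy_gd(0.1, 0.0)

# With noise, the stationary variance of the iterates is roughly
# step * sigma**2 / (2 - step): the small step is far more robust.
spread_small = pvariance([noisy_gd(0.1, 1.0, seed=s) for s in range(50)])
spread_large = pvariance([noisy_gd(0.9, 1.0, seed=s) for s in range(50)])
```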

