INFORMS Houston – 2017
3 - Linear Convex Underestimators for Dynamic Systems
Kamil Khan, PhD, McMaster University, Hamilton, ON, Canada, kamilkhan@mcmaster.ca
A new, efficient adjoint-based method is presented for computing useful affine underestimators for solutions of parametric ordinary differential equations, to provide bounds for deterministic dynamic global optimization methods. The method harnesses several recent advances, including a general framework for convex relaxation generation by Tsoukalas and Mitsos, an instance of this framework that yields continuously differentiable relaxations, and a technique by Scott and Barton for describing convex relaxations of ODE systems as solutions of hybrid discrete/continuous systems.

SC76
372E
Advances in Learning and Optimization
Sponsored: Optimization, Integer and Discrete Optimization
Sponsored Session
Chair: Sebastian Pokutta, Georgia Institute of Technology, Atlanta, GA, 30332-0205, United States, sebastian.pokutta@isye.gatech.edu
Chair: Alfredo Torrico, Georgia Institute of Technology, Atlanta, GA, United States, atorrico3@gatech.edu

1 - Learning Combinatorial Structures
Swati Gupta, Massachusetts Institute of Technology, Cambridge, MA, 02139, United States, swatig@mit.edu, Michel X. Goemans, Patrick Jaillet
We study the minimization of separable strictly convex functions over polyhedra. This problem is motivated by online convex optimization methods whose bottleneck is the minimization of an (often) separable convex metric known as the Bregman divergence. We provide a conceptually simple algorithm, Inc-Fix, for the case of submodular base polyhedra. For cardinality-based submodular polytopes, we show that Inc-Fix can be sped up to be the state-of-the-art method for minimizing uniform divergences. We show that the running time of Inc-Fix is independent of the convexity parameters of the objective function.

2 - Posterior Sampling for Reinforcement Learning: Worst-case Regret Bounds
Randy Jia, PhD Candidate, Columbia University, New York, NY, United States, rqj2000@columbia.edu, Shipra Agrawal
We present a posterior sampling (aka Thompson sampling) based algorithm for the reinforcement learning problem. Our main result is a \tilde{O}(D\sqrt{SAT}) bound on the worst-case (minimax) regret of our approach when the underlying unknown Markov Decision Process (MDP) is communicating with finite diameter D, a finite number of states S, and a finite number of actions A. Here, regret compares, over a finite time horizon T, the total reward achieved by the reinforcement learning algorithm to the total expected reward of an optimal infinite-horizon undiscounted average-reward policy.
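The abstract describes the approach only at a high level; the following is a minimal, illustrative Python sketch of generic posterior (Thompson) sampling for a tabular MDP, not the authors' algorithm or analysis. It assumes known rewards, keeps Dirichlet posteriors over the transitions, and solves each sampled model with a discounted value-iteration proxy rather than the undiscounted average-reward criterion analyzed in the talk; all identifiers (env_step, S, A, and so on) are hypothetical.

```python
import numpy as np

def sample_and_solve(counts, rewards, gamma=0.95, iters=500):
    """Draw one MDP from the Dirichlet posterior and value-iterate on it."""
    S, A, _ = counts.shape
    P = np.array([[np.random.dirichlet(counts[s, a] + 1.0) for a in range(A)]
                  for s in range(S)])              # sampled transition kernel
    V = np.zeros(S)
    for _ in range(iters):                         # discounted value iteration
        Q = rewards + gamma * (P @ V)              # Q[s, a] on the sampled MDP
        V = Q.max(axis=1)
    return Q.argmax(axis=1)                        # greedy policy for the sample

def psrl(env_step, rewards, S, A, episodes=100, horizon=50):
    """Posterior sampling loop: sample a model, act greedily, update counts."""
    counts = np.zeros((S, A, S))                   # Dirichlet posterior parameters
    s = 0
    for _ in range(episodes):
        policy = sample_and_solve(counts, rewards)
        for _ in range(horizon):
            a = policy[s]
            s_next = env_step(s, a)                # environment transition (assumed given)
            counts[s, a, s_next] += 1.0            # Bayesian posterior update
            s = s_next
    return counts
```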
3 - Dynamic Data-driven Estimation of Non-parametric Choice Models
Nam Ho-Nguyen, Carnegie Mellon University, 5000 Forbes Avenue, Pittsburgh, PA, 15213, United States, hnh@andrew.cmu.edu, Fatma Kilinc-Karzan
We study non-parametric models for consumer choice behaviour. These models were introduced to alleviate unreasonable assumptions and issues of suboptimal model fit/selection present in traditional parametric approaches. We present a simple convex optimization-based framework to efficiently learn a non-parametric choice model from data. Our method enjoys provable convergence guarantees and extends naturally to the dynamic observation model. Each iteration of our method, as well as of existing approaches from the literature, requires solving a combinatorial subproblem. We examine this in detail and identify conditions under which it can be efficiently solved.

3 - Dynamic Stochastic Approximation for Multistage Stochastic Optimization
Zhiqiang Zhou, Georgia Institute of Technology, Atlanta, GA, Contact: zzhoubrian@gatech.edu, Guanghui Lan
This paper considers a multistage stochastic programming problem with stochastic conic constraints in every stage. We first present a new algorithm, dynamic stochastic approximation (DSA), to handle this optimization problem. We show that this algorithm achieves an optimal convergence rate of O(1/\epsilon^4) in terms of the number of random samples when applied to a three-stage stochastic optimization problem under the assumption that the objective function is generally convex, and that this rate improves to O(1/\epsilon^2) when the objective function is strongly convex. Moreover, we present a variant of DSA for an extended multistage programming problem.
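As a point of reference for the rates quoted above, here is a minimal Python sketch of the single-stage projected stochastic approximation step that multistage schemes such as DSA compose across stages. It is illustrative only: it omits the nested stage structure and the stochastic conic constraints, and the ball constraint, step-size rule, and example objective are assumptions rather than details from the talk.

```python
import numpy as np

def project_ball(x, radius=1.0):
    """Euclidean projection onto a ball, standing in for a simple feasible set X."""
    norm = np.linalg.norm(x)
    return x if norm <= radius else x * (radius / norm)

def stochastic_approximation(grad_sample, x0, n_iters=10_000, c=1.0):
    """Projected SA for min_{x in X} E[F(x, xi)], returning the averaged iterate."""
    x = x0.copy()
    avg = np.zeros_like(x0)
    for t in range(1, n_iters + 1):
        g = grad_sample(x)                            # unbiased stochastic gradient
        x = project_ball(x - (c / np.sqrt(t)) * g)    # O(1/sqrt(t)) step size
        avg += (x - avg) / t                          # running average of iterates
    return avg

# Example: minimize E[||x - xi||^2 / 2] with xi ~ N(mu, 0.01 I); gradient is x - xi.
mu = np.array([0.3, -0.2])
sol = stochastic_approximation(lambda x: x - (mu + 0.1 * np.random.randn(2)),
                               np.zeros(2))
```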

SC77
372F
Novel Approaches in Distributed/Large-scale Optimization
Sponsored: Optimization, Computational Optimization and Software
Sponsored Session
Chair: Ilbin Lee, Georgia Tech, 755 Ferst Dr. NW, Atlanta, GA, 30332, United States, ilee79@gatech.edu

1 - Fast Algorithms for Robust PCA via Gradient Descent
Constantine Caramanis, University of Texas, Mail Code C0806, Austin, TX, 78712-0240, United States, constantine@utexas.edu
Abstract not available.

2 - Computational Sensitivity Analysis for High-Dimensional Linear Programs
Stewart Curry, Georgia Tech, scurry9@gatech.edu
Sensitivity analysis is an integral part of decision-making using optimization modeling. Such analysis is particularly important in situations where the input parameters are uncertain. Global sensitivity analysis estimates the variance of the optimal objective value attributable to each input parameter. However, it relies on solving a given optimization problem for randomly generated data points, and methods for doing so efficiently have not been studied. In this talk, we will discuss recent efforts to make global sensitivity analysis more computationally efficient and, in doing so, provide insights into the geometry of parametric linear programs.

3 - Communication-Efficient Decentralized Stochastic Optimization
Soomin Lee, Georgia Tech, 144 Ponce de Leon Avenue NE, Apt 1608, Atlanta, GA, 30308, United States, soomin.lee@isye.gatech.edu, Guanghui Lan, Yi Zhou
Optimization problems arising in decentralized multi-agent systems have gained significant attention, along with privacy preservation and distributed data mining/processing issues. The distributed nature of these problems is inherent, as each agent has only partial knowledge of the problem data, which necessitates costly communications among neighboring agents. In this talk, we present a new class of decentralized first-order methods for nonsmooth and stochastic optimization problems that can significantly reduce the number of inter-node communications and efficiently solve subproblems through linearizations of the local objective functions.
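To make the communication pattern concrete, here is a minimal Python sketch of plain decentralized stochastic gradient descent with a gossip (mixing) matrix, the baseline that communication-efficient methods like those in the talk improve upon. The ring topology, quadratic local objectives, and step sizes are illustrative assumptions, not details from the abstract.

```python
import numpy as np

n_agents, dim = 5, 3
# Agent i privately holds f_i(x) = ||x - targets[i]||^2 / 2.
targets = np.random.randn(n_agents, dim)

# Doubly stochastic mixing matrix for a ring: average with the two neighbors.
W = np.zeros((n_agents, n_agents))
for i in range(n_agents):
    W[i, i] = W[i, (i - 1) % n_agents] = W[i, (i + 1) % n_agents] = 1.0 / 3.0

X = np.zeros((n_agents, dim))                # row i is agent i's local iterate
for t in range(1, 2001):
    grads = X - targets + 0.05 * np.random.randn(n_agents, dim)  # noisy local gradients
    X = W @ X - (1.0 / t) * grads            # one gossip round + one local SA step

# All rows of X approach the minimizer of (1/n) sum_i f_i, i.e. targets.mean(axis=0).
```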
SC78
381A
Data-driven Distributed Optimization and Learning for Modern Power Systems
Sponsored: Energy, Natural Res & the Environment, Electricity
Sponsored Session
Chair: Andy Sun, Georgia Institute of Technology, Atlanta, GA, 30312, United States, andy.sun@isye.gatech.edu

1 - Deep Learning in Power Markets
Joel Macaluso, Texas A&M University, jmacaluso@tamu.edu
In this work, we are interested in exploring deep learning techniques and their application to power markets. Power markets are known for extreme price volatility and sensitivity to random variables such as weather. Electricity dispatch requires planning ahead while taking reliability concerns into account. In this work, we apply deep learning to this task.

2 - A Multi-sensor Feedback Estimator for Forecasting Variability in Local Area Photo-voltaic Power Generation
Jeff Manning, University of Texas at Austin, Austin, TX, United States, jeff.manning@rastrac.com
We propose and investigate the tractability of predicting surface-driven convective cumulus clouds in a local area by means of a feedback estimator. The estimator senses the spatio-temporal cloud-shadow field and uses a Navier-Stokes fluid solution to estimate the surface heating field and, ultimately, the evolution of the future cloud field that results from it. Such clouds have lifetimes ranging from five to thirty minutes, so a predictor must account for cloud evolution in addition to mere horizontal advection. The design is challenged by loss of information due to convective turbulence in the atmospheric boundary layer (ABL), so we investigate conditions under which the problem may or may not be practically solvable.
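For contrast with the feedback estimator described above, here is a minimal Python sketch of the "mere horizontal advection" baseline the abstract refers to: shift the observed cloud-shadow field downwind and use the result as the forecast. The grid resolution, wind vector, and lead time are made-up values, and the periodic boundary handling via np.roll is a toy simplification.

```python
import numpy as np

def advect(shadow_field, wind_uv, dt, dx):
    """Forecast by shifting the field a whole number of grid cells downwind."""
    shift_x = int(round(wind_uv[0] * dt / dx))   # cells moved in x over lead time dt
    shift_y = int(round(wind_uv[1] * dt / dx))   # cells moved in y
    # np.roll wraps around the domain edges (periodic boundaries; toy simplification).
    return np.roll(np.roll(shadow_field, shift_y, axis=0), shift_x, axis=1)

field = (np.random.rand(64, 64) > 0.8).astype(float)              # synthetic 0/1 shadow map
forecast = advect(field, wind_uv=(5.0, 2.0), dt=600.0, dx=100.0)  # 10-minute-ahead forecast
```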

