INFORMS Annual Meeting Phoenix 2018



2 - The Benefits of Stochastic Proximal Steps
Sebastien Martin, Massachusetts Institute of Technology, Cambridge, MA, 02139, United States, Patrick Jaillet, Dimitris Bertsimas
The stochastic proximal point algorithm for convex optimization is an alternative to stochastic gradient steps: it solves a convex optimization problem at each step instead of simply computing a gradient. In this work we investigate the types of optimization problems for which this algorithm is competitive with stochastic gradient descent, with a focus on the tractability of machine learning applications. In particular, we study the impact of mini-batching and of proximal step approximations.

3 - The Online Saddle Point Problem with Applications to Constrained Online Convex Optimization
Adrian Rivera, Georgia Institute of Technology, Atlanta, GA, United States, He Wang, Huan Xu
Online Convex Optimization (OCO) is a powerful framework for sequential decision making with many applications in optimization, game theory, and machine learning. We generalize the OCO framework in two ways. First, we explicitly introduce another player into the setting by considering convex-concave games; the goal of the players is to find a sequence of actions such that both are guaranteed performance close to that of the Nash equilibrium of the sum of the games. We provide an algorithm to solve this problem. Second, we demonstrate that the algorithm can be used to solve a variant of OCO where the decisions a player has made in the past affect what it can do now.

4 - Learning Combinatorial Optimization Algorithms Over Graphs
Elias B. Khalil, Georgia Tech, 1062 Hemphill Avenue NW, Atlanta, GA, 30318, United States, Hanjun Dai, Yuyu Zhang, Bistra Dilkina, Le Song
The design of good heuristics for NP-hard combinatorial problems often requires significant specialized knowledge and trial-and-error. Can we automate this challenging, tedious process and learn the algorithms instead? In many real-world applications, the same optimization problem is solved on a regular basis, with only slight changes in the data. We propose a unique combination of reinforcement learning and graph embedding to address this challenge. The learned greedy policy behaves like a meta-algorithm that incrementally constructs a solution. Our framework is versatile and has been successfully used to solve instances of the Minimum Vertex Cover, Max Cut, and Traveling Salesman problems.

n MC03 North Bldg 121C
Innovation Contests
Sponsored: Technology, Innovation Management & Entrepreneurship
Sponsored Session
Chair: Ersin Korpeoglu, University College London, London, E14 5AA, United Kingdom

1 - Parallel Innovation Contests
C. Gizem Korpeoglu, University College London, Gower Street, London, WC1E 6BT, United Kingdom, Ersin Korpeoglu, Isa Emin Hafalir
We study innovation contests where multiple organizers seek solutions from agents, and the quality of an agent's solution depends on her effort and uncertainty. We find that when uncertainty is sufficiently large, organizers benefit from agents' entry into multiple contests. An organizer's profit is unimodal in the number of contests, and the optimal number of contests increases with uncertainty. Thus, practitioners who seek innovative (resp., low-novelty) solutions may benefit from running multiple parallel contests and from encouraging (resp., discouraging) agents' entry into multiple contests.

2 - Dueling Crowdsourcing Contests
Konstantinos Stouras, University of Virginia, The Darden School, 100 Darden Boulevard, Charlottesville, VA, 22903, United States, Sanjiv Erat, Kenneth Lichtendahl
Solvers' participation and effort decisions in a contest are affected not only by its design but also by the design of any competing contests that run in parallel. We provide a theoretical analysis of the equilibrium budget allocation in a game played among competing innovation contests.

3 - When to Involve Inhouse Suppliers in Procurement Contests
Zhi Chen, INSEAD, 1 Ayer Rajah Avenue, Singapore, Singapore, Jochen Schlapp, Jurgen Mihm
In many purchasing projects, suppliers compete by performing some custom product or technology development regardless of whether they win the project. In practice, two competition structures are observed in such procurement contests: (i) all participants are external suppliers; or (ii) one of the suppliers is (partially) owned by the buyer (e.g., through a joint venture). In this study, we analyze when either of these structures is optimal for a buyer seeking a custom innovation.

4 - Optimal Duration of Innovation Contests
Ersin Korpeoglu, University College London, 1 Canada Square, London, E14 5AA, United Kingdom, C. Gizem Korpeoglu, Sidika Tunc
We study the optimal duration and award scheme of an innovation contest in which an organizer elicits innovative solutions from agents. Each agent exerts costly effort to improve her solution and faces output uncertainty. We find that the optimal contest duration may increase with the novelty and sophistication of the solutions the organizer seeks. We show that an organizer with moderate or high urgency in obtaining solutions may adopt a winner-take-all award scheme, while an organizer with low urgency may give multiple awards. This may explain why many contests on platforms give multiple awards. Consistent with empirical evidence, we show that the optimal duration and the optimal total award are positively correlated.

n MC04 North Bldg 122A
Joint Session Integer Programming/OR Frontiers: Machine Learning and Discrete Optimization III
Sponsored: Optimization/Integer and Discrete Optimization
Sponsored Session
Chair: Sebastian Pokutta, Georgia Institute of Technology, Atlanta, GA, 30332-0205, United States
Co-Chair: Alfredo Torrico, Georgia Institute of Technology, Atlanta, GA, United States

1 - Distributionally Robust Submodular Maximization
Matthew Staib, MIT, Bryan Wilder, Stefanie Jegelka
Submodular functions have wide applications, but we often lack direct access to the underlying function f. We focus on stochastic functions given as an expectation of functions over a distribution P. In practice, we have only finite samples fi from P. The standard approach indirectly optimizes f by maximizing the sum Σi fi, ignoring generalization to P. We achieve better performance on the true function f by showing how to perform distributionally robust optimization (DRO) for submodular functions. We give efficient algorithms backed by theoretical guarantees that leverage new contributions to DRO. Our empirical evidence shows that DRO improves generalization to the unknown stochastic submodular function.

n MC05 North Bldg 122B
Conic Optimization in Machine Learning II
Sponsored: Optimization/Linear and Conic Optimization
Sponsored Session
Chair: Somayeh Moazeni, Stevens Institute of Technology, Hoboken, NJ, 07030, United States

1 - Conic Optimization in Sequential Learning
Somayeh Moazeni, Stevens Institute of Technology, Babbio Center, 1 Castle Point Terrace on Hudson, Hoboken, NJ, 07030, United States
Sequential optimal learning is a sequential decision problem with an exploration-exploitation trade-off. Building on dynamic Bayesian learning and given a learning budget, the procedure identifies effective design elements. This talk discusses conic optimization approaches to solve this problem under a general class of response models with point processes. We show that this optimization problem can be solved by a sequence of linear optimization problems. This approximation is capable of offering close-to-optimal solutions and significantly reduces computational time for high-dimensional feature spaces.

2 - Maximizing the Probability of an Arrival Count in Point-process Intensity Control
Boris Defourny, Lehigh University, Industrial and Systems Engineering, 200 W. Packer Ave, Bethlehem, PA, 18015, United States
We consider the maximization of the probability of attaining a prescribed count of arrivals generated by a point process, by controlling its intensity. We show that the optimal intensity control law has switching times that are affine in the arrival count, and find closed-form expressions for the policy parameters.
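The intensity-control problem in the last abstract can be written, in one common formulation (the horizon T, the intensity bounds, and the counting-process notation below are illustrative assumptions, not taken from the abstract):

\[
\max_{\lambda(\cdot)} \;\; \mathbb{P}\big(N_T = n\big)
\qquad \text{s.t.} \qquad \underline{\lambda} \le \lambda_t \le \overline{\lambda}, \quad 0 \le t \le T,
\]

where \(N_T\) is the number of arrivals on \([0,T]\) of a point process with controlled stochastic intensity \(\lambda_t\), and \(n\) is the prescribed target count. The stated result is that the optimal control switches between intensity levels at times that are affine in the current arrival count.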

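The stochastic proximal point step described in the first abstract on this page can be illustrated on a least-squares loss, where the per-step subproblem min_z f_i(z) + ||z - x||^2/(2*eta) has a closed form. The instance below is a minimal sketch under assumed data and step sizes, not the authors' experiments:

```python
import numpy as np

def sgd_step(x, a, b, eta):
    """One stochastic gradient step on f_i(x) = 0.5 * (a.x - b)**2."""
    return x - eta * (a @ x - b) * a

def prox_point_step(x, a, b, eta):
    """One stochastic proximal point step on the same f_i.

    Solves argmin_z 0.5*(a.z - b)**2 + ||z - x||^2 / (2*eta) exactly;
    the closed form follows from the first-order optimality condition.
    """
    return x - eta * (a @ x - b) / (1.0 + eta * (a @ a)) * a

# Illustrative assumption: a small consistent least-squares instance.
rng = np.random.default_rng(0)
n, d = 200, 5
A = rng.normal(size=(n, d))
x_true = rng.normal(size=d)
b = A @ x_true

def run(step, eta, iters=2000):
    x = np.zeros(d)
    for _ in range(iters):
        i = rng.integers(n)          # sample one data point per step
        x = step(x, A[i], b[i], eta)
    return x

x_sgd = run(sgd_step, eta=0.05)
x_prox = run(prox_point_step, eta=0.05)
print(np.linalg.norm(x_sgd - x_true), np.linalg.norm(x_prox - x_true))
```

Note that the proximal step damps the update by 1/(1 + eta*||a||^2), so it remains stable for step sizes where plain SGD would diverge, which is one reason the comparison in the abstract is of interest.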
