2016 INFORMS Annual Meeting Program
TD18 106A-MCC
Resource Allocation Problems in Nonprofit Settings
Sponsored: Public Sector OR
Sponsored Session

Chair: Gemma Berenguer, Purdue University, 403 W State St, West Lafayette, IN, 47907, United States, gemmabf@purdue.edu

1 - Public Facility Location And Fair Distribution Problem: Fractional Programming Approach
Chong Hyun Park, Purdue University, West Lafayette, IN, 47906, United States, park456@purdue.edu, Gemma Berenguer
We consider a public facility location problem in which a decision maker wants to maximize the aggregated utilities of the service recipients while achieving a fair distribution of the items being distributed. A bi-objective mixed-integer linear fractional programming problem is formulated and solved with several algorithms.

3 - Payment For Results: Funding Non-profit Operations
Sripad K Devalkar, Indian School of Business, Hyderabad, India, sripad_devalkar@isb.edu, Milind Sohoni, Neha Sharma
We consider the interaction between a donor making voluntary contributions to fund a development project and the non-profit organization (NPO) using the funds to implement the project. With information asymmetry about the NPO’s efficiency and exogenous uncertainty affecting the actual benefit delivered by the project, we show that ex-post funding (payment for results) may not always maximize the donor’s utility even though it helps overcome information asymmetry. We also characterize conditions under which ex-ante funding (traditional funding) and ex-post funding (payment for results) are equilibrium outcomes of the donor-NPO interaction.

4 - Using Optimization To Maximize Diversity In Engineering Discussion Groups At The University Of Michigan
Kayse Lee Maass, University of Michigan, Ann Arbor, MI, United States, leekayse@umich.edu, Mark Daskin
The College of Engineering at the University of Michigan assigns over 1250 freshmen to discussion groups for a “freshman reads” program. The college wants each group to be as diverse as possible with respect to (a) country of origin, (b) gender, (c) ethnicity, (d) in-state/out-of-state status, and (e) whether a student is the first in his/her family to attend college. We implemented a large-scale optimization model to assign students to groups and have successfully used the results for two years. The model and implementation are discussed.

TD19 106B-MCC
Stable Sets, Zero-forcing Sets, and Target Sets in Graphs
Sponsored: Computing
Sponsored Session

Chair: Balabhaskar Balasundaram, Oklahoma State University, 322 Engineering North, Stillwater, OK, 74078, United States, baski@okstate.edu

1 - Discrete vs. Continuous Formulations For Computing The Stability Number: Results And Insights
Jitamitra Desai, Nanyang Technological University, Singapore, Singapore, jdesai@ntu.edu.sg, Balabhaskar Balasundaram
In this research, we compare and contrast discrete (0-1) and continuous formulations for determining stable (independent) sets of a graph. In this context, we define a new class of “vertex sets” and derive an explicit characterization to compute the number of alternate optima present in both the 0-1 and fractional programming formulations. A byproduct of these vertex sets is another approach to determining maximal independent sets. We also pose a conjecture that relates the solution of the LP relaxation of the 0-1 formulation to the number of alternate optima of the stable set problem. Finally, preliminary computations on standard testbed problems from the literature are presented.
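As background for the abstract above, a minimal sketch of the two formulation types being contrasted is the textbook edge formulation of the stability number alpha(G) of a graph G = (V, E) and its LP relaxation; this is standard material and not necessarily the exact formulation studied in the talk.

\[
\alpha(G) \;=\; \max\Bigl\{\, \textstyle\sum_{v \in V} x_v \;:\; x_u + x_v \le 1 \ \ \forall \{u,v\} \in E,\ \ x_v \in \{0,1\} \ \ \forall v \in V \Bigr\},
\]
\[
\text{LP relaxation:}\quad \max\Bigl\{\, \textstyle\sum_{v \in V} x_v \;:\; x_u + x_v \le 1 \ \ \forall \{u,v\} \in E,\ \ 0 \le x_v \le 1 \ \ \forall v \in V \Bigr\}.
\]

The LP optimum upper-bounds alpha(G), and its basic optimal solutions are half-integral (coordinates in {0, 1/2, 1}), which is one reason comparing alternate optima of the discrete and continuous formulations is natural.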
2 - Building Ensembles Of Support Vector Machine Classifiers Through Multi-Objective Optimization
Ozgu Turgut, Wayne State University, ozguturgut@wayne.edu, Ratna Babu Chinnam
The trade-off parameter ‘C’ of support vector machine (SVM) classifiers is a key training input that balances generalization and empirical error. However, existing methods for determining this parameter are usually based on brute-force search, which is inefficient. The intent of this study is to incorporate a genuine exact multi-objective optimization algorithm that generates representative Pareto-optimal sets with SVM training in order to avoid the C-tuning phase. The use of the algorithm for two additional major purposes, namely ensemble formation and confidence calculation for SVM classifiers, is also studied.

3 - Multi-objective Optimization Under Multicollinearity
Haidar Almohri, Wayne State University, almohri@wayne.edu, Ratna Babu Chinnam, Arash Ali
While data-driven process optimization methods are routine, several challenges arise when dealing with variables with strong multicollinearity. In practice, there may also be multiple conflicting objectives influenced by a common set of variables. We propose optimization algorithms to jointly optimize multiple objectives under multicollinearity. The methods are motivated by problems from the retailing and manufacturing domains.

TD17 105B-MCC
Statistical Learning with Convex Optimization
Sponsored: Optimization, Nonlinear Programming
Sponsored Session

Chair: Robert Freund, Massachusetts Institute of Technology, 100 Main Street, Cambridge, MA, 02142-1347, United States, rfreund@mit.edu
Co-Chair: Rahul Mazumder, Massachusetts Institute of Technology, 100 Main Street, Cambridge, MA, 02142-1347, United States, rahulmaz@mit.edu

1 - An Optimal Aggregation Procedure For Nonparametric Regression With Convex And Non-convex Models
Alexander Rakhlin, University of Pennsylvania, rakhlin@gmail.com
Exact oracle inequalities for regression have been extensively studied in statistics and learning theory over the past decade. In the case of a misspecified model, the focus has been on either parametric or convex classes. We present a new estimator that steps outside of the model in the non-convex case and reduces to least squares in the convex case. To analyze the estimator for general nonparametric classes, we prove a generalized Pythagorean theorem and study the supremum of a negative-mean stochastic process (which we term the offset Rademacher complexity) via the chaining technique. (Joint work with T. Liang and K. Sridharan.)

2 - Convex Regularization For High-dimensional Tensor Regression
Ming Yuan, University of Wisconsin, myuan@stat.wisc.edu
Data in the format of tensors, or multilinear arrays, arise naturally in many modern applications. In this talk, I shall discuss several examples where convex optimization approaches can be utilized for solving high-dimensional tensor problems under low-dimensional structural assumptions.
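As background for the tensor regression abstract above, one commonly used convex formulation (an illustration only; the talk may use different losses or regularizers) estimates a coefficient tensor B from responses y_i and covariate tensors X_i via regularized least squares:

\[
\widehat{\mathcal{B}} \;\in\; \arg\min_{\mathcal{B}} \; \frac{1}{2n} \sum_{i=1}^{n} \bigl( y_i - \langle \mathcal{X}_i, \mathcal{B} \rangle \bigr)^2 \;+\; \lambda\, R(\mathcal{B}),
\]

where R is a convex surrogate for low tensor rank, for example the sum of nuclear norms of the matricizations, R(\mathcal{B}) = \sum_k \|\mathcal{B}_{(k)}\|_*, or a tensor nuclear norm, and lambda > 0 trades off fit against the low-dimensional structural assumption.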
3 - Distributed Proximal Algorithms For Convex Optimization
Garud Iyengar, Columbia University, garud@ieor.columbia.edu
We propose a distributed first-order augmented Lagrangian algorithm to minimize the sum of composite convex functions, where each term in the sum is known only at one of the nodes, and only nodes connected by an edge can communicate directly with each other. This optimization model abstracts a number of applications in distributed sensing and machine learning. We show that any limit point of the iterates is optimal and that, for any epsilon > 0, an epsilon-optimal solution can be computed within O(log(1/epsilon)) iterations, which require O((d_max^{1.5}/d_min)/epsilon) communication steps, where d_max (resp. d_min) denotes the largest (resp. smallest) node degree. (See the sketch after this session listing.)

4 - Linear Estimation Through Unknown Non-linear Transformations
Constantine Caramanis, University of Texas, constantine@utexas.edu
Abstract to come.
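The following toy script illustrates the communication pattern described in the distributed optimization abstract above (TD17, talk 3): each node holds a private smooth loss, takes a local proximal step, and exchanges its iterate only with graph neighbors. It is a generic decentralized proximal-gradient sketch under assumed problem data (a path graph, local least squares plus an l1 term), not the augmented Lagrangian algorithm from the talk; all names and parameters are illustrative.

import numpy as np

# Toy decentralized proximal-gradient sketch (illustrative only, not the talk's algorithm).
# Each node i holds a private smooth term f_i(x) = 0.5*||A_i x - b_i||^2; the shared
# nonsmooth term lam*||x||_1 is handled by a local soft-thresholding (proximal) step.
# Nodes mix their iterates only with neighbors on a path graph via a doubly stochastic
# matrix W, so each iteration uses only edge-wise communication, as in the abstract.

rng = np.random.default_rng(0)
n_nodes, d = 5, 10
A = [rng.standard_normal((20, d)) for _ in range(n_nodes)]
b = [rng.standard_normal(20) for _ in range(n_nodes)]
lam, step = 0.1, 1e-3

# Metropolis weights for the path graph 0-1-2-3-4 (each edge gets weight 1/3).
W = np.zeros((n_nodes, n_nodes))
for i in range(n_nodes):
    for j in (i - 1, i + 1):
        if 0 <= j < n_nodes:
            W[i, j] = 1.0 / 3.0
    W[i, i] = 1.0 - W[i].sum()

def soft_threshold(v, t):
    # Proximal operator of t*||.||_1, applied elementwise.
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

x = np.zeros((n_nodes, d))  # row i is node i's local copy of the decision variable
for _ in range(500):
    x_mix = W @ x  # one round of neighbor-only communication (consensus averaging)
    grads = np.array([A[i].T @ (A[i] @ x[i] - b[i]) for i in range(n_nodes)])
    x = soft_threshold(x_mix - step * grads, step * lam)  # local gradient + prox step

print("max disagreement across nodes:", np.max(np.abs(x - x.mean(axis=0))))

With a constant step size this simple scheme only reaches a neighborhood of the consensus optimum; the algorithm described in the abstract is what attains the stated iteration and communication complexity guarantees.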