INFORMS Annual Meeting – Phoenix 2018



3 - Patient Demographic and Health Factors Influencing the Length of Stay in Hospital
Surya Ayyalasomayajula, Oklahoma State University, Stillwater, OK, 74074, United States, Ankita Srivastava, Dursun Delen

This study explores which demographic and general-health factors predict a patient's length of stay (LOS) in a hospital. It conceptualizes that general health condition has the greatest impact on LOS, followed by demographics. The paper then studies the hospital factors that influence the average length of stay (ALOS), or hospital length of stay (HLOS). Data from 22 hospitals and 5,553 patients strongly support the proposed idea that LOS is determined by general health and demographics. The results do not support the idea that ALOS depends on hospital factors.

4 - A DEA Evaluation of States' Infant Mortality Rate in the U.S.
Negar Darabi, PhD Student, Virginia Tech, Blacksburg, VA, 24060-4913, United States, Alireza Ebrahimvandi, Niyousha Hosseinichimeh, Konstantinos P. Triantis

States vary in terms of their infant mortality rates (IMR). Here, we build a state-level database to compare the 50 states' performance with respect to three major variables: IMR, preterm birth, and low birth weight. We use a Data Envelopment Analysis (DEA) approach to test different factors associated with high performance in infant survival by benchmarking states. Prior studies examined the IMR of neighboring states rather than using a mathematical model to choose peers. DEA finds best practices for states that suffer from poor outcomes (i.e., a high rate of infant mortality). The results of this analysis would help policymakers implement effective interventions.

5 - Evaluation of Alternative Diagnostic Test Intervals and Thresholds for Lung-RADS Criteria on the Effectiveness of Lung Cancer Screening
Mehrad Bastani, Postdoctoral Scholar, Stanford University, 305 Campus Drive, Palo Alto, CA, 94305, United States, Sylvia Plevritis, Iakovos Toumazis, Ann Leung

The U.S. Preventive Services Task Force recently recommended low-dose computed tomography (LDCT) lung screening for high-risk current and former smokers based on the National Lung Screening Trial (NLST). In response to the high false-positive rate observed in the NLST (27.3%), the American College of Radiology developed Lung-RADS, a standardized system for reporting and following up LDCT findings. Several studies have shown a reduction in the false-positive rate when Lung-RADS is applied to the NLST. To complement these studies, we evaluate the effect of alternative diagnostic testing intervals and actionable nodule-size thresholds of Lung-RADS on the mortality reduction associated with lung cancer screening.

WB64 West Bldg 104A
Joint Session DM/Practice Curated: Data Science for Decision Support
Sponsored: Data Mining
Sponsored Session
Chair: Kazim Topuz, University of Tulsa, 800 Tucker Ave, Tulsa, OK, 74104, United States

1 - Designing Early Detection and Intervention Techniques via Predictive Models for Bottleneck Business Courses
Sinjini Mitra, Associate Professor, California State University-Fullerton, 800 N. State College Boulevard, ISDS Department, Fullerton, CA, 92831, United States, Zvi Goldstein

We present a study of the factors affecting student success in two bottleneck Business courses and use subsets of those factors to build predictive models of student success. The models can be used to detect at-risk students early on, so that suitable intervention techniques can be implemented to improve their odds of completing the courses successfully. The results show that students who receive the intervention and take advantage of it have significantly improved performance at the end of each course compared to those who do not. We conclude by briefly discussing Supplemental Instruction, an academic support program that greatly benefits such at-risk students.

2 - Should Low-rated Items be Recommended? An Empirical Analysis
Sanjog Ray, Indian Institute of Management-Indore, Rau Pithampur Road, Faculty Block A-202, Indore, 453331, India

Collaborative filtering is the most popular approach used in recommender systems for recommending items such as movies and books. Items that a user will most likely rate highly are recommended; as a result, low-rated items are never recommended. This paper questions the practice of recommender-system algorithms ignoring low-rated items. Based on our analysis of two large datasets, one on movies and one on books, we show that low-rated movies should not be excluded from the final list of recommendations. We also provide suggestions on how low-rated movies can be recommended.
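The neighborhood-based collaborative filtering that the last abstract refers to can be illustrated with a minimal sketch. The ratings, user names, and cosine-similarity choice below are all hypothetical illustrations, not the talk's datasets or algorithm; note how the standard top-of-the-ranking cutoff is exactly what excludes low-rated items.

```python
import math

# Hypothetical user-item ratings on a 1-5 scale (illustrative only).
ratings = {
    "alice": {"matrix": 5, "titanic": 1, "inception": 4},
    "bob":   {"matrix": 4, "titanic": 2, "inception": 5, "cats": 1},
    "carol": {"matrix": 1, "titanic": 5, "cats": 2},
}

def cosine_sim(a, b):
    """Cosine similarity over the items both users rated."""
    common = set(a) & set(b)
    if not common:
        return 0.0
    dot = sum(a[i] * b[i] for i in common)
    na = math.sqrt(sum(a[i] ** 2 for i in common))
    nb = math.sqrt(sum(b[i] ** 2 for i in common))
    return dot / (na * nb)

def predict(user, item):
    """Similarity-weighted average of other users' ratings for `item`."""
    num = den = 0.0
    for other, r in ratings.items():
        if other == user or item not in r:
            continue
        s = cosine_sim(ratings[user], r)
        num += s * r[item]
        den += abs(s)
    return num / den if den else None

# Score every unseen item for one user; a conventional recommender keeps
# only the top of this ranking, which is the practice the talk questions.
unseen = {i for r in ratings.values() for i in r} - set(ratings["alice"])
scores = {i: predict("alice", i) for i in unseen}
print(scores)
```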

3 - Post-traumatic Stress Disorder (PTSD) Diagnosis & Prediction: A Bayesian Network Model
Yi Tan, University of Kansas School of Business, 1654 Naismith Drive, Lawrence, KS, 66045, United States, Prakash P. Shenoy, Catherine Shenoy, Mary Oehlert

In this study, we first propose a Bayesian network model for post-traumatic stress disorder (PTSD) prediction. Using Veterans Administration patient data from 2000 to 2015, the model is constructed from patients' demographic information, military history, other accompanying mental disorders, and various psychological tests. Psychological tests are usually required to diagnose or confirm PTSD. To aid diagnosis, we are also developing a decision support technique that psychiatrists can use to decide which psychological tests, and in what sequence, a new patient should take. The technique identifies the most informative tests based on information theory.

4 - Predicting and Understanding Freshmen Student Retention: Development of a Bayesian Belief Network-based DSS
Kazim Topuz, Assistant Professor, PhD, University of Tulsa, 1826 23rd Avenue SE, Norman, OK, 73071-1065, United States, Dursun Delen

Student attrition is an administratively important yet practically challenging problem for decision makers and researchers. This study aims to find the prominent variables, and their conditional dependencies and interrelations, that affect student attrition in college settings. Specifically, using a large and feature-rich dataset, the proposed methodology successfully captures the probabilistic interactions between attrition and related factors to reveal the underlying nonlinear relationships.
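Both abstracts above rest on Bayesian network inference, and the PTSD talk ranks tests by informativeness. A toy sketch of those two ingredients, posterior updating and expected information gain, is shown below on a two-node network with entirely made-up probabilities; the actual models involve many variables and are not reproduced here.

```python
import math

# Hypothetical two-node network: Disorder -> TestResult.
p_d = 0.2                 # assumed prior P(disorder)
p_pos_given_d = 0.9       # assumed test sensitivity
p_pos_given_not_d = 0.1   # assumed false-positive rate

# Bayes' rule: P(D | positive) = P(positive | D) P(D) / P(positive)
p_pos = p_pos_given_d * p_d + p_pos_given_not_d * (1 - p_d)
posterior = p_pos_given_d * p_d / p_pos

def entropy(p):
    """Binary entropy in bits."""
    return 0.0 if p in (0.0, 1.0) else -(p * math.log2(p) + (1 - p) * math.log2(1 - p))

# Mutual information between disorder and test result: the kind of
# information-theoretic score that could rank candidate tests.
p_neg = 1 - p_pos
post_neg = (1 - p_pos_given_d) * p_d / p_neg
info_gain = entropy(p_d) - (p_pos * entropy(posterior) + p_neg * entropy(post_neg))
print(round(posterior, 3), round(info_gain, 3))
```

Under these assumed numbers, a positive result raises the posterior from 0.2 to about 0.69; a test with higher mutual information would be scheduled earlier.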
WB65 West Bldg 104B
Joint Session DM/Practice Curated: Data Science and Robust Optimization
Sponsored: Data Mining
Sponsored Session
Chair: Hari Bandi, MIT, Cambridge, MA, 02139, United States

1 - Practical Robust Optimization for Least-squares Problems
Long Zhao, UT McCombs Business School, 2110 Speedway Stop B6500, CBA 5.334 Q, Austin, TX, 78712-1277, United States, Deepayan Chakrabarti, Kumar Muthuraman

Solutions to robust optimization formulations are sometimes too conservative because of the worst-case performance objective. For the least-squares problem, we describe a way to overcome this by combining the robust version with the classical formulation. The talk describes the method and how it leverages a deeper understanding of estimation errors to improve out-of-sample performance. Across more than 50 different real-world scenarios, the method consistently outperforms other methods that ignore such space-level information, and it outperforms both ridge and lasso regression most of the time.

2 - Bootstrap Robust Prescriptive Analytics
Bart Paul Gerard Van Parys, MIT, Room E40-154, 77 Massachusetts Avenue, Cambridge, MA, 02139, United States, Dimitris Bertsimas

We discuss prescribing optimal decisions in a framework where the cost depends on uncertain problem parameters that must be learned from data. Proper prescriptive methods exploit additional observed contextual information on a large number of covariates. Naive use of training data may lead to gullible decisions that are over-calibrated to one particular data set. We use robust optimization and the bootstrap to propose two novel prescriptive methods. Both resulting robust prescriptive methods reduce to tractable convex optimization problems and enjoy limited disappointment on bootstrap data.

3 - Learning a Mixture of Gaussians via Mixed Integer Optimization
Hari Bandi, MIT, Cambridge, MA, 02139, United States

We consider the problem of estimating the parameters of a Gaussian mixture model (GMM) given access to n samples x1, x2, ..., xn ∈ R^d that are believed to have come from a mixture of multiple subpopulations. We present a novel MIO formulation that optimally recovers the parameters of the GMM by minimizing a discrepancy measure between the empirical distribution function and the distribution function of the GMM. We show that the MIO approach is practically solvable in minutes for datasets with n in the tens of thousands, and that, independent of sample size, it achieves an average improvement over the EM algorithm of 60-70% in MAPE when estimating the means and 50-60% when estimating the covariance matrices.
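The EM algorithm that the mixture-of-Gaussians talk above benchmarks against can be sketched in a few lines. This is a one-dimensional, two-component toy on synthetic data with assumed true means 0 and 5, not the talk's MIO formulation or its experimental setup.

```python
import math
import random

random.seed(0)

# Synthetic 1-D data from two assumed subpopulations (means 0 and 5).
data = [random.gauss(0.0, 1.0) for _ in range(200)] + \
       [random.gauss(5.0, 1.0) for _ in range(200)]

def em_gmm(xs, iters=50):
    """Classical EM for a two-component 1-D GMM (the baseline method)."""
    w, mu, var = [0.5, 0.5], [min(xs), max(xs)], [1.0, 1.0]
    for _ in range(iters):
        # E-step: posterior responsibility of each component for each point.
        resp = []
        for x in xs:
            dens = [w[k] / math.sqrt(2 * math.pi * var[k])
                    * math.exp(-(x - mu[k]) ** 2 / (2 * var[k]))
                    for k in range(2)]
            s = sum(dens)
            resp.append([d / s for d in dens])
        # M-step: re-estimate weights, means, variances from responsibilities.
        for k in range(2):
            nk = sum(r[k] for r in resp)
            w[k] = nk / len(xs)
            mu[k] = sum(r[k] * x for r, x in zip(resp, xs)) / nk
            var[k] = sum(r[k] * (x - mu[k]) ** 2 for r, x in zip(resp, xs)) / nk
    return w, mu, var

w, mu, var = em_gmm(data)
print(sorted(round(m, 2) for m in mu))
```

EM only guarantees a local optimum of the likelihood, which is the gap the talk's MIO formulation targets by optimizing a distributional discrepancy globally.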

