Sloan Working Papers
http://hdl.handle.net/1721.1/1792
Last updated: 2017-11-22T18:20:00Z

The Stochastic Container Relocation Problem
http://hdl.handle.net/1721.1/107762
2017-03-28T00:00:00Z
Galle, V.; Borjian Boroujeni, S.; Manshadi, V.H.; Barnhart, C.; Jaillet, P.
The Container Relocation Problem (CRP) is concerned with finding a sequence of moves of containers that minimizes the number of relocations needed to retrieve all containers, while respecting a given order of retrieval. However, the assumption of knowing the full retrieval order of containers
is particularly unrealistic in real operations. This paper studies the stochastic CRP (SCRP), which relaxes this assumption. A new multi-stage stochastic model, called the batch model, is introduced, motivated, and compared with an existing model (the online model). The two main contributions are an
optimal algorithm called Pruning-Best-First-Search (PBFS) and a randomized approximate algorithm called PBFS-Approximate with a bounded average error. Both algorithms, applicable in the batch and online models, are based on a new family of lower bounds for which we show some theoretical properties. Moreover, we introduce two new heuristics outperforming the best existing heuristics. Algorithms, bounds and heuristics are tested in an extensive computational section. Finally, based on strong computational evidence, we conjecture the optimality of the “Leveling” heuristic in a special “no information” case, where at any retrieval stage, any of the remaining containers is equally likely to be retrieved next.
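The relocation counting at the heart of the CRP can be illustrated with the classical "blocking containers" bound (a simpler bound than the new family introduced in the paper): a container is blocking if some container below it in the same stack must be retrieved earlier, and each blocking container forces at least one relocation. A minimal sketch, assuming a bay is given as a list of stacks listed bottom to top and containers are labeled by retrieval order (label 1 is retrieved first):

```python
def blocking_lower_bound(bay):
    """Count blocking containers in a bay.

    A container is blocking if any container below it in the same
    stack carries a smaller label (i.e., must be retrieved earlier).
    The count is a lower bound on the number of relocations needed.
    """
    count = 0
    for stack in bay:  # each stack listed bottom to top
        for i, c in enumerate(stack):
            if any(below < c for below in stack[:i]):
                count += 1
    return count

# Two stacks: container 4 sits above 1, and 5 sits above 2,
# so at least two relocations are unavoidable.
bay = [[3, 1, 4], [2, 5]]
```

Here `blocking_lower_bound(bay)` returns 2: containers 4 and 5 each cover a container that must leave first.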
Technology Readiness Levels at 40: a study of state-of-the-art use, challenges, and opportunities
http://hdl.handle.net/1721.1/96307
2015-04-01T00:00:00Z
Olechowski, Alison; Eppinger, Steven D.; Joglekar, Nitin
The technology readiness level (TRL) scale was introduced by NASA in the 1970s as a tool for assessing the maturity of technologies during complex system development. TRL data have been used to make multi-million dollar technology management decisions in programs such as NASA's Mars Curiosity Rover. This scale is now a de facto standard used for technology assessment and oversight in many industries, from power systems to consumer electronics. Low TRLs have been associated with significantly reduced timeliness and increased costs across a portfolio of US Department of Defense programs. However, anecdotal evidence raises concerns about many of the practices related to TRLs. We study TRL implementations based on semi-structured interviews with employees from seven different organizations and examine documentation collected from industry standards and organizational guidelines related to technology development and demonstration. Our findings consist of 15 challenges observed in TRL implementations that fall into three different categories: system complexity, planning and review, and validity of assessment. We explore research opportunities for these challenges and posit that addressing these opportunities, either singly or in groups, could improve decision processes and performance outcomes in complex engineering projects.
The Big Data Newsvendor: Practical Insights from Machine Learning
http://hdl.handle.net/1721.1/85658
2014-02-06T00:00:00Z
Rudin, Cynthia; Vahn, Gah-Yi
We investigate the newsvendor problem when one has n observations of p features related to the demand as well as past demands. Both small data (p/n = o(1)) and big data (p/n = O(1)) are considered. For both cases, we propose a machine learning algorithm to solve the problem and derive a tight generalization bound on the expected out-of-sample cost. The algorithms can be extended intuitively to other situations, such as having censored demand data, ordering for multiple, similar items, and having a new item with limited data. We show analytically that our custom-designed, feature-based approach can be better than other data-driven approaches such as Sample Average Approximation (SAA) and separated estimation and optimization (SEO). Our method can also naturally incorporate the operational statistics method. We then apply the algorithms to nurse staffing in a hospital emergency room and show that (i) they can reduce the median out-of-sample cost by up to 46% and 16% compared to SAA and SEO respectively, with statistical significance at 0.01, and (ii) this is achieved either by carefully selecting a small number of features and applying the small data algorithm, or by using a large number of features and using the big data algorithm, which automates feature selection.
This is a revision of previously published DSpace entry: http://hdl.handle.net/1721.1/81412.
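The feature-based idea can be illustrated with a minimal sketch (not the paper's exact algorithm): empirical-risk minimization of the newsvendor cost over linear decision rules q(x) = x·β, fitted here by plain subgradient descent. The function name, learning rate, and iteration count are illustrative choices, not from the paper.

```python
import numpy as np

def fit_newsvendor(X, d, b, h, lr=0.1, iters=2000):
    """Fit a linear order-quantity rule q(x) = x @ beta by minimizing
    the empirical newsvendor cost
        (1/n) * sum_i [ b * max(d_i - q_i, 0) + h * max(q_i - d_i, 0) ]
    via subgradient descent. b is the underage (lost-sale) cost,
    h the overage (holding) cost; X should include an intercept column.
    """
    n, p = X.shape
    beta = np.zeros(p)
    for _ in range(iters):
        q = X @ beta
        # Subgradient of the per-sample cost with respect to q_i:
        # -b when we under-ordered (q < d), +h when we over-ordered.
        g = np.where(q < d, -b, h)
        beta -= lr * (X.T @ g) / n
    return beta
```

With only an intercept feature, this recovers the classical data-driven solution: the fitted order quantity sits at the b/(b+h) quantile of the demand sample.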
An Interpretable Stroke Prediction Model using Rules and Bayesian Analysis
http://hdl.handle.net/1721.1/82148
2013-11-15T00:00:00Z
Letham, Benjamin; Rudin, Cynthia; McCormick, Tyler H.; Madigan, David
We aim to produce predictive models that are not only accurate, but also interpretable to human experts. Our models are decision lists, which consist of a series of if...then... statements (for example, if high blood pressure, then stroke) that discretize a high-dimensional, multivariate feature space into a series of simple, readily interpretable decision statements. We introduce a generative model called the Bayesian List Machine which yields a posterior distribution over possible decision lists. It employs a novel prior structure to encourage sparsity. Our experiments show that the Bayesian List Machine has predictive accuracy on par with the current top algorithms for prediction in machine learning. Our method is motivated by recent developments in personalized medicine, and can be used to produce highly accurate and interpretable medical scoring systems. We demonstrate this by producing an alternative to the CHADS2 score, actively used in clinical practice for estimating the risk of stroke in patients with atrial fibrillation. Our model is as interpretable as CHADS2, but more accurate.
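The decision-list structure itself is easy to illustrate. A minimal sketch in Python, where the rules and risk labels are invented for illustration and are not taken from the learned model or from CHADS2:

```python
# A decision list is an ordered set of if-then rules; the first
# matching rule fires, and a default applies if none match.
# These rules and labels are purely illustrative.
RULES = [
    (lambda p: p["hypertension"] and p["age"] > 75, "high risk"),
    (lambda p: p["hypertension"], "medium risk"),
    (lambda p: p["age"] > 75, "medium risk"),
]
DEFAULT = "low risk"

def predict(patient):
    """Return the label of the first rule the patient satisfies."""
    for condition, label in RULES:
        if condition(patient):
            return label
    return DEFAULT
```

Ordering matters: a patient with hypertension who is over 75 is caught by the first rule and never reaches the later, more general ones, which is exactly what makes each prediction traceable to a single readable statement.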