Larry Brown’s Work on Admissibility. Iain M. Johnstone. Source: Statistical Science, Volume 34, Number 4, 657–668. Abstract: Many papers in the early part of Brown’s career focused on the admissibility or otherwise of estimators of a vector parameter. He established that inadmissibility of invariant estimators in three and higher dimensions is a general phenomenon, and found deep and beautiful connections between admissibility and other areas of mathematics. This review touches on several of his major contributions, with a focus on his celebrated 1971 paper connecting admissibility, recurrence and elliptic partial differential equations.
Gaussianization Machines for Non-Gaussian Function Estimation Models. T. Tony Cai. Source: Statistical Science, Volume 34, Number 4, 635–656. Abstract: A wide range of nonparametric function estimation models have been studied individually in the literature. Among them, the homoscedastic nonparametric Gaussian regression is arguably the best known and understood. Inspired by the asymptotic equivalence theory, Brown, Cai and Zhou (Ann. Statist. 36 (2008) 2055–2084; Ann. Statist. 38 (2010) 2005–2046) and Brown et al. (Probab. Theory Related Fields 146 (2010) 401–433) developed a unified approach to turn a collection of non-Gaussian function estimation models into a standard Gaussian regression, so that any good Gaussian nonparametric regression method can then be used. These Gaussianization Machines have two key components: binning and transformation. When combined with BlockJS, a wavelet thresholding procedure for Gaussian regression, the procedures are computationally efficient with strong theoretical guarantees. Technical analysis given in the same papers shows that the estimators attain the optimal rate of convergence adaptively over a large set of Besov spaces and across a collection of non-Gaussian function estimation models, including robust nonparametric regression, density estimation, and nonparametric regression in exponential families. The estimators are also spatially adaptive. The Gaussianization Machines significantly extend the flexibility and scope of the theories and methodologies originally developed for the conventional nonparametric Gaussian regression. This article aims to provide a concise account of these Gaussianization Machines.
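The binning-and-transformation recipe is concrete enough to sketch. The following minimal Python illustration treats the density estimation case, assuming the mean-matching root transform sqrt(count + 1/4) of Brown et al.; the function names and the pass-through denoiser stub are illustrative assumptions, and in practice a Gaussian regression procedure such as BlockJS wavelet thresholding would be plugged in where indicated.

```python
import numpy as np

def gaussianize_counts(counts):
    """Mean-matching root transform: approx N(sqrt(m), 1/4) for Poisson(m) bins."""
    return np.sqrt(counts + 0.25)

def density_estimate(x, T=256, denoise=None):
    """Bin-and-transform Gaussianization for density estimation (sketch).

    x: iid sample; T: number of equal-width bins (a power of 2 for wavelets).
    denoise: any Gaussian nonparametric regression routine, e.g. a wavelet
    block-thresholding procedure such as BlockJS; a pass-through stub is
    used here so the sketch is self-contained.
    """
    lo, hi = x.min(), x.max()
    counts, edges = np.histogram(x, bins=T, range=(lo, hi))
    y = gaussianize_counts(counts)           # approximately Gaussian, variance 1/4
    f_hat = denoise(y) if denoise else y     # plug in a Gaussian regression method
    # invert the root transform and normalize to a density
    dens = np.maximum(f_hat, 0.0) ** 2
    width = (hi - lo) / T
    dens /= dens.sum() * width
    centers = (edges[:-1] + edges[1:]) / 2
    return centers, dens

rng = np.random.default_rng(0)
centers, dens = density_estimate(rng.normal(size=2000))
```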
Larry Brown’s Contributions to Parametric Inference, Decision Theory and Foundations: A Survey. James O. Berger, Anirban DasGupta. Source: Statistical Science, Volume 34, Number 4, 621–634. Abstract: This article gives a panoramic survey of parametric statistical inference, decision theory and the foundations of statistics for the period 1965–2010, through the lens of Larry Brown’s contributions to varied aspects of this massive area. The article covers sufficiency, shrinkage estimation, admissibility, minimaxity, complete class theorems, estimated confidence, conditional confidence procedures, Edgeworth and higher-order asymptotic expansions, variational Bayes, Stein’s SURE, differential inequalities, geometrization of convergence rates, asymptotic equivalence, aspects of empirical process theory, inference after model selection, unified frequentist and Bayesian testing, and Wald’s sequential theory. A reasonably comprehensive bibliography is provided.
Models as Approximations—Rejoinder. Andreas Buja, Arun Kumar Kuchibhotla, Richard Berk, Edward George, Eric Tchetgen Tchetgen, Linda Zhao. Source: Statistical Science, Volume 34, Number 4, 606–620. Abstract: We respond to the discussants of our articles, emphasizing the importance of inference under misspecification in the context of the reproducibility/replicability crisis. Along the way, we discuss the roles of diagnostics and model building in regression as well as connections between our well-specification framework and semiparametric theory.
Discussion: Models as Approximations. Dalia Ghanem, Todd A. Kuffner. Source: Statistical Science, Volume 34, Number 4, 604–605.
Comment: Statistical Inference from a Predictive Perspective. Alessandro Rinaldo, Ryan J. Tibshirani, Larry Wasserman. Source: Statistical Science, Volume 34, Number 4, 599–603. Abstract: What is the meaning of a regression parameter? Why is this the de facto standard object of interest for statistical inference? These are delicate issues, especially when the model is misspecified. We argue that focusing on predictive quantities may be a desirable alternative.
Comment: Models as (Deliberate) Approximations. David Whitney, Ali Shojaie, Marco Carone. Source: Statistical Science, Volume 34, Number 4, 591–598.
Comment: Models Are Approximations! Anthony C. Davison, Erwan Koch, Jonathan Koh. Source: Statistical Science, Volume 34, Number 4, 584–590. Abstract: This discussion focuses on areas of disagreement with the papers, particularly the target of inference and the case for using the robust ‘sandwich’ variance estimator in the presence of moderate mis-specification. We also suggest that existing procedures may be appreciably more powerful for detecting mis-specification than the authors’ RAV statistic, and comment on the use of the pairs bootstrap in balanced situations.
Comment: “Models as Approximations I: Consequences Illustrated with Linear Regression” by A. Buja, R. Berk, L. Brown, E. George, E. Pitkin, L. Zhao and K. Zhang. Roderick J. Little. Source: Statistical Science, Volume 34, Number 4, 580–583.
Discussion of Models as Approximations I & II. Dag Tjøstheim. Source: Statistical Science, Volume 34, Number 4, 575–579.
Comment: Models as Approximations. Nikki L. B. Freeman, Xiaotong Jiang, Owen E. Leete, Daniel J. Luckett, Teeranan Pokaprakarn, Michael R. Kosorok. Source: Statistical Science, Volume 34, Number 4, 572–574.
Comment on Models as Approximations, Parts I and II, by Buja et al. Jerald F. Lawless. Source: Statistical Science, Volume 34, Number 4, 569–571. Abstract: I comment on the papers Models as Approximations I and II, by A. Buja, R. Berk, L. Brown, E. George, E. Pitkin, M. Traskin, L. Zhao and K. Zhang.
Discussion of Models as Approximations I & II. Sara van de Geer. Source: Statistical Science, Volume 34, Number 4, 566–568. Abstract: We discuss the papers “Models as Approximations” I & II, by A. Buja, R. Berk, L. Brown, E. George, E. Pitkin, M. Traskin, L. Zhao and K. Zhang (Part I) and A. Buja, L. Brown, A. K. Kuchibhotla, R. Berk, E. George and L. Zhao (Part II). We present a summary with some details for the generalized linear model.
Models as Approximations II: A Model-Free Theory of Parametric Regression. Andreas Buja, Lawrence Brown, Arun Kumar Kuchibhotla, Richard Berk, Edward George, Linda Zhao. Source: Statistical Science, Volume 34, Number 4, 545–565. Abstract: We develop a model-free theory of general types of parametric regression for i.i.d. observations. The theory replaces the parameters of parametric models with statistical functionals, to be called “regression functionals,” defined on large nonparametric classes of joint $x\textrm{-}y$ distributions, without assuming a correct model. Parametric models are reduced to heuristics to suggest plausible objective functions. An example of a regression functional is the vector of slopes of linear equations fitted by OLS to largely arbitrary $x\textrm{-}y$ distributions, without assuming a linear model (see Part I). More generally, regression functionals can be defined by minimizing objective functions, solving estimating equations, or with ad hoc constructions. In this framework, it is possible to achieve the following: (1) define a notion of “well-specification” for regression functionals that replaces the notion of correct specification of models, (2) propose a well-specification diagnostic for regression functionals based on reweighting distributions and data, (3) decompose sampling variability of regression functionals into two sources, one due to the conditional response distribution and another due to the regressor distribution interacting with misspecification, both of order $N^{-1/2}$, (4) exhibit plug-in/sandwich estimators of standard error as limit cases of $x\textrm{-}y$ bootstrap estimators, and (5) provide theoretical heuristics to indicate that $x\textrm{-}y$ bootstrap standard errors may generally be preferred over sandwich estimators.
Models as Approximations I: Consequences Illustrated with Linear Regression. Andreas Buja, Lawrence Brown, Richard Berk, Edward George, Emil Pitkin, Mikhail Traskin, Kai Zhang, Linda Zhao. Source: Statistical Science, Volume 34, Number 4, 523–544. Abstract: In the early 1980s, Halbert White inaugurated a “model-robust” form of statistical inference based on the “sandwich estimator” of standard error. This estimator is known to be “heteroskedasticity-consistent,” but it is less well known to be “nonlinearity-consistent” as well. Nonlinearity, however, raises fundamental issues because in its presence regressors are not ancillary, hence cannot be treated as fixed. The consequences are deep: (1) population slopes need to be reinterpreted as statistical functionals obtained from OLS fits to largely arbitrary joint $x\textrm{-}y$ distributions; (2) the meaning of slope parameters needs to be rethought; (3) the regressor distribution affects the slope parameters; (4) randomness of the regressors becomes a source of sampling variability in slope estimates of order $1/\sqrt{N}$; (5) inference needs to be based on model-robust standard errors, including sandwich estimators or the $x\textrm{-}y$ bootstrap. In theory, model-robust and model-trusting standard errors can deviate by arbitrary magnitudes either way. In practice, significant deviations between them can be detected with a diagnostic test.
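The paper’s central contrast is easy to reproduce numerically: under a nonlinear truth fitted by a linear model with random regressors, model-trusting and sandwich standard errors disagree, while the $x\textrm{-}y$ (pairs) bootstrap tracks the sandwich. The following is a minimal sketch, not the authors’ code; all function and variable names are ours.

```python
import numpy as np

def ols_with_ses(X, y, B=1000, seed=0):
    """OLS slopes with model-trusting, sandwich, and x-y bootstrap SEs (sketch)."""
    n, p = X.shape
    XtX_inv = np.linalg.inv(X.T @ X)
    beta = XtX_inv @ (X.T @ y)
    r = y - X @ beta
    # model-trusting (classical) standard errors assume the linear model holds
    se_lin = np.sqrt((r @ r / (n - p)) * np.diag(XtX_inv))
    # sandwich estimator: heteroskedasticity- and nonlinearity-consistent
    meat = X.T @ (X * (r**2)[:, None])
    se_sand = np.sqrt(np.diag(XtX_inv @ meat @ XtX_inv))
    # x-y (pairs) bootstrap: resample regressor-response pairs jointly, so
    # randomness of the regressors enters the sampling variability
    rng = np.random.default_rng(seed)
    boot = np.empty((B, p))
    for b in range(B):
        idx = rng.integers(0, n, size=n)
        boot[b] = np.linalg.solve(X[idx].T @ X[idx], X[idx].T @ y[idx])
    return beta, se_lin, se_sand, boot.std(axis=0, ddof=1)

# a deliberately misspecified fit: the true mean is quadratic, the model linear
rng = np.random.default_rng(1)
x = rng.uniform(-1, 2, size=500)
X = np.column_stack([np.ones(500), x])
y = x**2 + rng.normal(size=500)
print(ols_with_ses(X, y))   # sandwich and bootstrap SEs agree; classical differs
```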
A Conversation with Peter Diggle. Peter M. Atkinson, Jorge Mateu. Source: Statistical Science, Volume 34, Number 3, 504–521. Abstract: Peter John Diggle was born on February 24, 1950, in Lancashire, England. Peter went to school in Scotland, and it was at the end of his school years that he found that he was good at maths and actually enjoyed it. Peter went to Edinburgh to do a maths degree, but transferred halfway through to Liverpool where he completed his degree. Peter studied for a year at Oxford and was then appointed in 1974 as a lecturer in statistics at the University of Newcastle-upon-Tyne where he gained his PhD, and was promoted to Reader in 1983. A sabbatical at the Swedish Royal College of Forestry gave him his first exposure to real scientific data and problems, prompting a move to CSIRO, Australia. After five years with CSIRO, where he was Senior, then Principal, then Chief Research Scientist and Chief of the Division of Mathematics and Statistics, he returned to the UK in 1988, to a Chair at Lancaster University. Since 2011 Peter has held appointments at Lancaster and Liverpool, together with honorary appointments at Johns Hopkins, Columbia and Yale. At Lancaster, Peter was the founder and Director of the Medical Statistics Unit (1995–2001), University Dean for Research (1998–2001), EPSRC Senior Fellow (2004–2008), Associate Dean for Research at the School of Health and Medicine (2007–2011), Distinguished University Professor, and leader of the CHICAS Research Group (2007–2017). A Fellow of the Royal Statistical Society since 1974, he was a Member of Council (1983–1985), Joint Editor of JRSSB (1984–1987), Honorary Secretary (1990–1996), awarded the Guy Medal in Silver (1997) and the Barnett Award (2018), Associate Editor of Applied Statistics (1998–2000), Chair of the Research Section Committee (1998–2000), and President (2014–2016). Away from work, Peter enjoys music, playing folk-blues guitar and tenor recorder, and listening to jazz. His running days are behind him, but he can just about hold his own in mixed-doubles badminton with his family. His boyhood hero was Stirling Moss, and he retains an enthusiasm for classic cars, not least his 1988 Porsche 924S. His favorite authors are George Orwell, Primo Levi and Nigel Slater. This interview was done prior to the fourth Spatial Statistics conference held in Lancaster, July 2017, where a session was dedicated to Peter celebrating his contributions to statistics.
Assessing the Causal Effect of Binary Interventions from Observational Panel Data with Few Treated Units. Pantelis Samartsidis, Shaun R. Seaman, Anne M. Presanis, Matthew Hickman, Daniela De Angelis. Source: Statistical Science, Volume 34, Number 3, 486–503. Abstract: Researchers are often challenged with assessing the impact of an intervention on an outcome of interest in situations where the intervention is nonrandomised, the intervention is only applied to one or few units, the intervention is binary, and outcome measurements are available at multiple time points. In this paper, we review existing methods for causal inference in these situations. We detail the assumptions underlying each method, emphasize connections between the different approaches and provide guidelines regarding their practical implementation. Several open problems are identified thus highlighting the need for future research.
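One widely used method in this setting is the synthetic control approach, which reweights control units to reproduce the treated unit’s pre-intervention trajectory. The sketch below is a minimal illustration under simplifying assumptions (no covariates, a plain least-squares pre-period fit); it is not the paper’s code, and all names are ours.

```python
import numpy as np
from scipy.optimize import minimize

def synthetic_control(Y, treated_unit, T0):
    """Synthetic control weights for one treated unit (sketch).

    Y: units x time outcome matrix; T0: number of pre-intervention periods.
    Finds nonnegative weights summing to one over the control units that
    best reproduce the treated unit's pre-intervention path.
    """
    y1 = Y[treated_unit, :T0]
    controls = np.delete(Y, treated_unit, axis=0)
    Y0 = controls[:, :T0]
    J = Y0.shape[0]
    obj = lambda w: np.sum((y1 - w @ Y0) ** 2)
    cons = ({'type': 'eq', 'fun': lambda w: w.sum() - 1},)
    res = minimize(obj, np.full(J, 1.0 / J), bounds=[(0, 1)] * J,
                   constraints=cons, method='SLSQP')
    synth = res.x @ controls                    # full synthetic trajectory
    effect = Y[treated_unit] - synth            # estimated effect path
    return res.x, effect

rng = np.random.default_rng(0)
Y = rng.normal(size=(20, 30)).cumsum(axis=1)    # 20 units, 30 periods
Y[0, 20:] += 5.0                                # unit 0 treated at t = 20
w, effect = synthetic_control(Y, treated_unit=0, T0=20)
```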
Conditionally Conjugate Mean-Field Variational Bayes for Logistic Models. Daniele Durante, Tommaso Rigon. Source: Statistical Science, Volume 34, Number 3, 472–485. Abstract: Variational Bayes (VB) is a common strategy for approximate Bayesian inference, but simple methods are only available for specific classes of models including, in particular, representations having conditionally conjugate constructions within an exponential family. Models with logit components are an apparently notable exception to this class, due to the absence of conjugacy among the logistic likelihood and the Gaussian priors for the coefficients in the linear predictor. To facilitate approximate inference within this widely used class of models, Jaakkola and Jordan (Stat. Comput. 10 (2000) 25–37) proposed a simple variational approach which relies on a family of tangent quadratic lower bounds of the logistic log-likelihood, thus restoring conjugacy between these approximate bounds and the Gaussian priors. This strategy is still implemented successfully, but few attempts have been made to formally understand the reasons underlying its excellent performance. Following a review on VB for logistic models, we cover this gap by providing a formal connection between the above bound and a recent Pólya-gamma data augmentation for logistic regression. Such a result places the computational methods associated with the aforementioned bounds within the framework of variational inference for conditionally conjugate exponential family models, thereby allowing recent advances for this class to be inherited also by the methods relying on Jaakkola and Jordan (Stat. Comput. 10 (2000) 25–37).
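The Jaakkola–Jordan scheme alternates two closed-form updates: a Gaussian update for q(beta) given the tangent quadratic bound, and an update of the variational parameters xi. Below is a minimal sketch assuming a N(0, tau^2 I) prior; the function and variable names are ours, not the paper’s.

```python
import numpy as np

def lam(xi):
    """lambda(xi) = tanh(xi/2) / (4 xi), with the limiting value 1/8 at xi = 0."""
    xi = np.asarray(xi, dtype=float)
    out = np.full_like(xi, 0.125)
    nz = xi != 0
    out[nz] = np.tanh(xi[nz] / 2) / (4 * xi[nz])
    return out

def vb_logistic(X, y, tau2=10.0, iters=50):
    """Jaakkola-Jordan variational Bayes for logistic regression (sketch).

    Gaussian prior beta ~ N(0, tau2 * I); y in {0, 1}. Returns the mean and
    covariance of the Gaussian variational posterior.
    """
    n, p = X.shape
    xi = np.ones(n)                    # variational (tangent) parameters
    P0 = np.eye(p) / tau2
    for _ in range(iters):
        # optimal Gaussian q(beta) given the quadratic lower bound at xi
        V = np.linalg.inv(P0 + 2 * (X * lam(xi)[:, None]).T @ X)
        m = V @ (X.T @ (y - 0.5))
        # optimal xi given q(beta): xi_i^2 = x_i' E[beta beta'] x_i
        xi = np.sqrt(np.einsum('ij,jk,ik->i', X, V + np.outer(m, m), X))
    return m, V

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
beta_true = np.array([1.0, -2.0, 0.5])
y = (rng.uniform(size=200) < 1 / (1 + np.exp(-X @ beta_true))).astype(float)
m, V = vb_logistic(X, y)
```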
User-Friendly Covariance Estimation for Heavy-Tailed Distributions. Yuan Ke, Stanislav Minsker, Zhao Ren, Qiang Sun, Wen-Xin Zhou. Source: Statistical Science, Volume 34, Number 3, 454–471. Abstract: We provide a survey of recent results on covariance estimation for heavy-tailed distributions. By unifying ideas scattered in the literature, we propose user-friendly methods that facilitate practical implementation. Specifically, we introduce elementwise and spectrumwise truncation operators, as well as their $M$-estimator counterparts, to robustify the sample covariance matrix. Different from the classical notion of robustness that is characterized by the breakdown property, we focus on the tail robustness which is evidenced by the connection between nonasymptotic deviation and confidence level. The key insight is that estimators should adapt to the sample size, dimensionality and noise level to achieve optimal tradeoff between bias and robustness. Furthermore, to facilitate practical implementation, we propose data-driven procedures that automatically calibrate the tuning parameters. We demonstrate their applications to a series of structured models in high dimensions, including the bandable and low-rank covariance matrices and sparse precision matrices. Numerical studies lend strong support to the proposed methods.
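A sketch of the elementwise truncation idea: each entry of the sample covariance is an average of products, and each product is shrunk at a level tau that grows with the sample size and shrinks with the dimension and confidence level. The constant and the plug-in scale estimate below are illustrative assumptions, not the paper’s calibrated data-driven procedure.

```python
import numpy as np

def psi(x, tau):
    """Elementwise truncation (shrinkage) operator psi_tau."""
    return np.sign(x) * np.minimum(np.abs(x), tau)

def truncated_cov(X, delta=0.05):
    """Elementwise truncated covariance estimator for heavy-tailed data (sketch).

    Each entry averages truncated products psi_tau(x_ki * x_kj); the level
    tau follows a tau ~ scale * sqrt(n / log(d^2 / delta)) recipe, with an
    ad hoc entrywise scale estimate standing in for careful calibration.
    """
    n, d = X.shape
    Xc = X - X.mean(axis=0)
    prods = Xc[:, :, None] * Xc[:, None, :]        # n x d x d products
    scale = np.sqrt((prods**2).mean(axis=0))       # entrywise scale estimate
    tau = scale * np.sqrt(n / np.log(d * d / delta))
    return psi(prods, tau).mean(axis=0)

rng = np.random.default_rng(0)
X = rng.standard_t(df=3, size=(500, 10))           # heavy-tailed sample
S = truncated_cov(X)
```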
The Geometry of Continuous Latent Space Models for Network Data. Anna L. Smith, Dena M. Asta, Catherine A. Calder. Source: Statistical Science, Volume 34, Number 3, 428–453. Abstract: We review the class of continuous latent space (statistical) models for network data, paying particular attention to the role of the geometry of the latent space. In these models, the presence/absence of network dyadic ties are assumed to be conditionally independent given the dyads’ unobserved positions in a latent space. In this way, these models provide a probabilistic framework for embedding network nodes in a continuous space equipped with a geometry that facilitates the description of dependence between random dyadic ties. Specifically, these models naturally capture homophilous tendencies and triadic clustering, among other common properties of observed networks. In addition to reviewing the literature on continuous latent space models from a geometric perspective, we highlight the important role the geometry of the latent space plays on properties of networks arising from these models via intuition and simulation. Finally, we discuss results from spectral graph theory that allow us to explore the role of the geometry of the latent space, independent of network size. We conclude with conjectures about how these results might be used to infer the appropriate latent space geometry from observed networks.
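A minimal sketch of the simplest member of this class, a latent distance model with Euclidean geometry: dyadic ties are conditionally independent given latent positions, with tie probability decreasing in latent distance. The dimension, intercept and names below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n, alpha = 100, 2.0
z = rng.normal(size=(n, 2))                     # latent positions in R^2
d = np.linalg.norm(z[:, None, :] - z[None, :, :], axis=-1)
p = 1 / (1 + np.exp(d - alpha))                 # ties less likely at larger distance
U = rng.uniform(size=(n, n))
A = np.triu((U < p).astype(int), k=1)           # dyads independent given positions
A = A + A.T                                     # undirected adjacency, no self-loops
degree = A.sum(axis=1)                          # nearby nodes cluster: homophily
```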
An Overview of Semiparametric Extensions of Finite Mixture Models. Sijia Xiang, Weixin Yao, Guangren Yang. Source: Statistical Science, Volume 34, Number 3, 391–404. Abstract: Finite mixture models have offered a very important tool for exploring complex data structures in many scientific areas, such as economics, epidemiology and finance. Semiparametric mixture models, which were introduced into traditional finite mixture models in the past decade, have brought forth exciting developments in their methodologies, theories, and applications. In this article, we not only provide a selective overview of the newly-developed semiparametric mixture models, but also discuss their estimation methodologies, theoretical properties if applicable, and some open questions. Recent developments are also discussed.
ROS Regression: Integrating Regularization with Optimal Scaling Regression. Jacqueline J. Meulman, Anita J. van der Kooij, Kevin L. W. Duisters. Source: Statistical Science, Volume 34, Number 3, 361–390. Abstract: We present a methodology for multiple regression analysis that deals with categorical variables (possibly mixed with continuous ones), in combination with regularization, variable selection and high-dimensional data ($P \gg N$). Regularization and optimal scaling (OS) are two important extensions of ordinary least squares regression (OLS) that will be combined in this paper. There are two data analytic situations for which optimal scaling was developed. One is the analysis of categorical data, and the other the need for transformations because of nonlinear relationships between predictors and outcome. Optimal scaling of categorical data finds quantifications for the categories, both for the predictors and for the outcome variables, that are optimal for the regression model in the sense that they maximize the multiple correlation. When nonlinear relationships exist, nonlinear transformation of predictors and outcome maximize the multiple correlation in the same way. We will consider a variety of transformation types; typically we use step functions for categorical variables, and smooth (spline) functions for continuous variables. Both types of functions can be restricted to be monotonic, preserving the ordinal information in the data. In combination with optimal scaling, three popular regularization methods will be considered: Ridge regression, the Lasso and the Elastic Net. The resulting method will be called ROS Regression (Regularized Optimal Scaling Regression). The OS algorithm provides straightforward and efficient estimation of the regularized regression coefficients, automatically gives the Group Lasso and Blockwise Sparse Regression, and extends them by the possibility to maintain ordinal properties in the data. Extended examples are provided.
A Conversation with Noel Cressie. Christopher K. Wikle, Jay M. Ver Hoef. Source: Statistical Science, Volume 34, Number 2, 349–359. Abstract: Noel Cressie, FAA, is Director of the Centre for Environmental Informatics in the National Institute for Applied Statistics Research Australia (NIASRA) and Distinguished Professor in the School of Mathematics and Applied Statistics at the University of Wollongong, Australia. He is also Adjunct Professor at the University of Missouri (USA), Affiliate of Org 398, Science Data Understanding, at NASA’s Jet Propulsion Laboratory (USA), and a member of the Science Team for NASA’s Orbiting Carbon Observatory-2 (OCO-2) satellite. Cressie was awarded a B.Sc. with First Class Honours in Mathematics in 1972 from the University of Western Australia, and an M.A. and Ph.D. in Statistics in 1973 and 1975, respectively, from Princeton University (USA). Two brief postdoctoral periods followed, at the Centre de Morphologie Mathématique, ENSMP, in Fontainebleau (France) from April 1975–September 1975, and at Imperial College, London (UK) from September 1975–January 1976. His past appointments have been at The Flinders University of South Australia from 1976–1983, at Iowa State University (USA) from 1983–1998, and at The Ohio State University (USA) from 1998–2012. He has authored or co-authored four books and more than 280 papers in peer-reviewed outlets, covering areas that include spatial and spatio-temporal statistics, environmental statistics, empirical-Bayesian and Bayesian methods including sequential design, goodness-of-fit, and remote sensing of the environment. Many of his papers also address important questions in the sciences. Cressie is a Fellow of the Australian Academy of Science, the American Statistical Association, the Institute of Mathematical Statistics and the Spatial Econometrics Association, and he is an Elected Member of the International Statistical Institute. Noel Cressie’s refereed, unrefereed, and other publications are available at: https://niasra.uow.edu.au/cei/people/UOW232444.html.
A Conversation with Robert E. Kass. Sam Behseta. Source: Statistical Science, Volume 34, Number 2, 334–348. Abstract: Rob Kass has been on the faculty of the Department of Statistics at Carnegie Mellon since 1981; he joined the Center for the Neural Basis of Cognition (CNBC) in 1997, and the Machine Learning Department (in the School of Computer Science) in 2007. He served as Department Head of Statistics from 1995 to 2004 and as Interim Co-Director of the CNBC from 2015 to 2018. He became the Maurice Falk Professor of Statistics and Computational Neuroscience in 2016. Kass has served as Chair of the Section for Bayesian Statistical Science of the American Statistical Association, Chair of the Statistics Section of the American Association for the Advancement of Science, founding Editor-in-Chief of the journal Bayesian Analysis and Executive Editor of Statistical Science. He is an elected Fellow of the American Statistical Association, the Institute of Mathematical Statistics and the American Association for the Advancement of Science. He has been recognized by the Institute for Scientific Information as one of the 10 most highly cited researchers, 1995–2005, in the category of mathematics. Kass is the recipient of the 2017 Fisher Award and lectureship by the Committee of the Presidents of the Statistical Societies. This interview took place at Carnegie Mellon University in November 2017.
Two-Sample Instrumental Variable Analyses Using Heterogeneous Samples. Qingyuan Zhao, Jingshu Wang, Wes Spiller, Jack Bowden, Dylan S. Small. Source: Statistical Science, Volume 34, Number 2, 317–333. Abstract: Instrumental variable analysis is a widely used method to estimate causal effects in the presence of unmeasured confounding. When the instruments, exposure and outcome are not measured in the same sample, Angrist and Krueger (J. Amer. Statist. Assoc. 87 (1992) 328–336) suggested to use two-sample instrumental variable (TSIV) estimators that use sample moments from an instrument-exposure sample and an instrument-outcome sample. However, this method is biased if the two samples are from heterogeneous populations so that the distributions of the instruments are different. In linear structural equation models, we derive a new class of TSIV estimators that are robust to heterogeneous samples under the key assumption that the structural relations in the two samples are the same. The widely used two-sample two-stage least squares estimator belongs to this class. It is generally not asymptotically efficient, although we find that it performs similarly to the optimal TSIV estimator in most practical situations. We then attempt to relax the linearity assumption. We find that, unlike one-sample analyses, the TSIV estimator is not robust to a misspecified exposure model. Additionally, to nonparametrically identify the magnitude of the causal effect, the noise in the exposure must have the same distributions in the two samples. However, this assumption is in general untestable because the exposure is not observed in one sample. Nonetheless, we may still identify the sign of the causal effect in the absence of homogeneity of the noise.
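A minimal sketch of the two-sample two-stage least squares (TS2SLS) estimator discussed in the paper: the first stage is fit on the instrument-exposure sample and its coefficients are carried over to predict exposures in the instrument-outcome sample. The simulation below assumes homogeneous samples and identical structural relations; the data-generating values and names are ours.

```python
import numpy as np

def ts2sls(Z1, X1, Z2, Y2):
    """Two-sample two-stage least squares (sketch).

    First stage on the instrument-exposure sample (Z1, X1); second stage on
    the instrument-outcome sample (Z2, Y2) using predicted exposures. The Z
    and X matrices should include an intercept column.
    """
    gamma = np.linalg.lstsq(Z1, X1, rcond=None)[0]    # first-stage coefficients
    X2_hat = Z2 @ gamma                               # predicted exposures, sample 2
    return np.linalg.lstsq(X2_hat, Y2, rcond=None)[0]

rng = np.random.default_rng(0)
n1 = n2 = 5000
z1, z2 = rng.normal(size=n1), rng.normal(size=n2)
u1, u2 = rng.normal(size=n1), rng.normal(size=n2)     # unmeasured confounders
x1 = 0.8 * z1 + u1 + rng.normal(size=n1)
x2 = 0.8 * z2 + u2 + rng.normal(size=n2)              # same structural relation
y2 = 2.0 * x2 + u2 + rng.normal(size=n2)              # true causal effect: 2
Z1 = np.column_stack([np.ones(n1), z1])
X1 = np.column_stack([np.ones(n1), x1])
Z2 = np.column_stack([np.ones(n2), z2])
print(ts2sls(Z1, X1, Z2, y2))                         # slope close to 2
```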
Producing Official County-Level Agricultural Estimates in the United States: Needs and Challenges. Nathan B. Cruze, Andreea L. Erciulescu, Balgobin Nandram, Wendy J. Barboza, Linda J. Young. Source: Statistical Science, Volume 34, Number 2, 301–316. Abstract: In the United States, county-level estimates of crop yield, production, and acreage published by the United States Department of Agriculture’s National Agricultural Statistics Service (USDA NASS) play an important role in determining the value of payments allotted to farmers and ranchers enrolled in several federal programs. Given the importance of these official county-level crop estimates, NASS continually strives to improve its crops county estimates program in terms of accuracy, reliability and coverage. In 2015, NASS engaged a panel of experts convened under the auspices of the National Academies of Sciences, Engineering, and Medicine Committee on National Statistics (CNSTAT) for guidance on implementing models that may synthesize multiple sources of information into a single estimate, provide defensible measures of uncertainty, and potentially increase the number of publishable county estimates. The final report, titled Improving Crop Estimates by Integrating Multiple Data Sources, was released in 2017. This paper discusses several needs and requirements for NASS county-level crop estimates that were illuminated during the activities of the CNSTAT panel. A motivating example of planted acreage estimation in Illinois illustrates several challenges that NASS faces as it considers adopting any explicit model for official crops county estimates.
The Importance of Being Clustered: Uncluttering the Trends of Statistics from 1970 to 2015. Laura Anderlucci, Angela Montanari, Cinzia Viroli. Source: Statistical Science, Volume 34, Number 2, 280–300. Abstract: In this paper, we retrace the recent history of statistics by analyzing all the papers published in five prestigious statistical journals since 1970, namely: The Annals of Statistics, Biometrika, Journal of the American Statistical Association, Journal of the Royal Statistical Society, Series B and Statistical Science. The aim is to construct a kind of “taxonomy” of the statistical papers by organizing and clustering them in main themes. In this sense, being identified in a cluster means being important enough to be uncluttered in the vast and interconnected world of statistical research. Since the main statistical research topics naturally arise, evolve or die over time, we also develop a dynamic clustering strategy, where a group in a time period is allowed to migrate or to merge into different groups in the following one. Results show that statistics is a very dynamic and evolving science, stimulated by the rise of new research questions and types of data.
Statistical Analysis of Zero-Inflated Nonnegative Continuous Data: A Review. Lei Liu, Ya-Chen Tina Shih, Robert L. Strawderman, Daowen Zhang, Bankole A. Johnson, Haitao Chai. Source: Statistical Science, Volume 34, Number 2, 253–279. Abstract: Zero-inflated nonnegative continuous (or semicontinuous) data arise frequently in biomedical, economical, and ecological studies. Examples include substance abuse, medical costs, medical care utilization, biomarkers (e.g., CD4 cell counts, coronary artery calcium scores), single cell gene expression rates, and (relative) abundance of microbiome. Such data are often characterized by the presence of a large portion of zero values and positive continuous values that are skewed to the right and heteroscedastic. Both of these features suggest that no simple parametric distribution may be suitable for modeling such type of outcomes. In this paper, we review statistical methods for analyzing zero-inflated nonnegative outcome data. We will start with the cross-sectional setting, discussing ways to separate zero and positive values and introducing flexible models to characterize right skewness and heteroscedasticity in the positive values. We will then present models of correlated zero-inflated nonnegative continuous data, using random effects to tackle the correlation on repeated measures from the same subject and that across different parts of the model. We will also discuss expansion to related topics, for example, zero-inflated count and survival data, nonlinear covariate effects, and joint models of longitudinal zero-inflated nonnegative continuous data and survival. Finally, we will present applications to three real datasets (i.e., microbiome, medical costs, and alcohol drinking) to illustrate these methods. Example code will be provided to facilitate applications of these methods.
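The canonical starting point in this literature is the cross-sectional two-part model: a logistic model separates zeros from positive values, and a right-skewed model (here lognormal) handles the positive part. Below is a minimal sketch using statsmodels; the simulated data-generating process and names are ours.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 2000
x = rng.normal(size=n)
X = sm.add_constant(x)
p_pos = 1 / (1 + np.exp(-(-0.5 + x)))                 # probability of a positive value
pos = rng.uniform(size=n) < p_pos
y = np.where(pos, np.exp(1.0 + 0.5 * x + 0.8 * rng.normal(size=n)), 0.0)

# Part 1: logistic model for zero versus positive
part1 = sm.Logit(pos.astype(float), X).fit(disp=0)
# Part 2: lognormal model fitted to the positive values only
part2 = sm.OLS(np.log(y[pos]), X[pos]).fit()
s2 = part2.scale                                      # residual variance on log scale

# The marginal mean combines both parts: E[Y|x] = P(Y>0|x) * E[Y | Y>0, x]
x0 = sm.add_constant(np.linspace(-2, 2, 5))
mean_hat = part1.predict(x0) * np.exp(part2.predict(x0) + s2 / 2)
print(mean_hat)
```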
A Kernel Regression Procedure in the 3D Shape Space with an Application to Online Sales of Children’s Wear. Gregorio Quintana-Ortí, Amelia Simó. Source: Statistical Science, Volume 34, Number 2, 236–252. Abstract: This paper is focused on kernel regression when the response variable is the shape of a 3D object represented by a configuration matrix of landmarks. Regression methods on this shape space are not trivial because this space has a complex finite-dimensional Riemannian manifold structure (non-Euclidean). Papers about it are scarce in the literature, the majority of them are restricted to the case of a single explanatory variable, and many of them are based on the approximated tangent space. In this paper, there are several methodological innovations. The first one is the adaptation of the general method for kernel regression analysis in manifold-valued data to the three-dimensional case of Kendall’s shape space. The second one is its generalization to the multivariate case and the addressing of the curse-of-dimensionality problem. Finally, we propose bootstrap confidence intervals for prediction. A simulation study is carried out to check the goodness of the procedure, and a comparison with a current approach is performed. Then, it is applied to a 3D database obtained from an anthropometric survey of the Spanish child population with a potential application to online sales of children’s wear.
Rejoinder: Bayes, Oracle Bayes, and Empirical Bayes. Bradley Efron. Source: Statistical Science, Volume 34, Number 2, 234–235.
Comment: Variational Autoencoders as Empirical Bayes. Yixin Wang, Andrew C. Miller, David M. Blei. Source: Statistical Science, Volume 34, Number 2, 229–233.
Comment: Empirical Bayes, Compound Decisions and Exchangeability. Eitan Greenshtein, Ya’acov Ritov. Source: Statistical Science, Volume 34, Number 2, 224–228. Abstract: We present some personal reflections on empirical Bayes/compound decision (EB/CD) theory following Efron (2019). In particular, we consider the role of exchangeability in the EB/CD theory and how it can be achieved when there are covariates. We also discuss the interpretation of EB/CD confidence intervals, the theoretical efficiency of the CD procedure, and the impact of sparsity assumptions.
Comment: Empirical Bayes Interval Estimation. Wenhua Jiang. Source: Statistical Science, Volume 34, Number 2, 219–223. Abstract: This is a contribution to the discussion of the enlightening paper by Professor Efron. We focus on empirical Bayes interval estimation. We discuss the oracle interval estimation rules, the empirical Bayes estimation of the oracle rule and the computation. Some numerical results are reported.
Comment: Bayes, Oracle Bayes and Empirical Bayes. Aad van der Vaart. Source: Statistical Science, Volume 34, Number 2, 214–218.
Comment: Minimalist $g$-Modeling. Roger Koenker, Jiaying Gu. Source: Statistical Science, Volume 34, Number 2, 209–213. Abstract: Efron’s elegant approach to $g$-modeling for empirical Bayes problems is contrasted with an implementation of the Kiefer–Wolfowitz nonparametric maximum likelihood estimator for mixture models for several examples. The latter approach has the advantage that it is free of tuning parameters and consequently provides a relatively simple complementary method.
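For concreteness, here is a sketch of the Kiefer–Wolfowitz NPMLE for a Gaussian location mixture, computed by EM on a fixed grid of support points. Koenker and Gu advocate much faster interior-point convex solvers; EM is used here only because it is short and transparent, and all names are ours.

```python
import numpy as np
from scipy.stats import norm

def npmle_mixture(x, grid_size=300, iters=500):
    """Kiefer-Wolfowitz NPMLE of a Gaussian location mixture (sketch).

    EM over mixing weights on a fixed grid; convex interior-point solvers
    reach the same estimate far faster but need more machinery.
    """
    grid = np.linspace(x.min(), x.max(), grid_size)
    L = norm.pdf(x[:, None], loc=grid[None, :], scale=1.0)   # likelihood matrix
    w = np.full(grid_size, 1.0 / grid_size)                  # mixing weights
    for _ in range(iters):
        post = L * w                                          # E-step
        post /= post.sum(axis=1, keepdims=True)
        w = post.mean(axis=0)                                 # M-step
    return grid, w

rng = np.random.default_rng(0)
theta = rng.choice([-2.0, 0.0, 3.0], size=1000, p=[0.3, 0.4, 0.3])
x = theta + rng.normal(size=1000)
grid, w = npmle_mixture(x)
# posterior-mean denoising rule implied by the fitted mixing distribution
L = norm.pdf(x[:, None], loc=grid[None, :])
theta_hat = (L * w * grid).sum(axis=1) / (L * w).sum(axis=1)
```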
Comment: Bayes, Oracle Bayes, and Empirical Bayes. Nan Laird. Source: Statistical Science, Volume 34, Number 2, 206–208.
Comment: Bayes, Oracle Bayes, and Empirical Bayes. Thomas A. Louis. Source: Statistical Science, Volume 34, Number 2, 202–205.
Bayes, Oracle Bayes and Empirical Bayes. Bradley Efron. Source: Statistical Science, Volume 34, Number 2, 177–201. Abstract: This article concerns the Bayes and frequentist aspects of empirical Bayes inference. Some of the ideas explored go back to Robbins in the 1950s, while others are current. Several examples are discussed, real and artificial, illustrating the two faces of empirical Bayes methodology: “oracle Bayes” shows empirical Bayes in its most frequentist mode, while “finite Bayes inference” is a fundamentally Bayesian application. In either case, modern theory and computation allow us to present a sharp finite-sample picture of what is at stake in an empirical Bayes analysis.
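One workhorse of the empirical Bayes toolkit in this line of work is Tweedie's formula, an $f$-modeling rule that needs only an estimate of the marginal density: $E[\theta\mid x] = x + \sigma^2\,\frac{d}{dx}\log f(x)$. The sketch below estimates $f$ with a kernel density estimate and a numerical derivative; Efron's own implementations fit a smooth parametric model to $\log f$ instead, so treat this as an illustrative stand-in with names of our choosing.

```python
import numpy as np
from scipy.stats import gaussian_kde

def tweedie(x, sigma=1.0, eps=1e-3):
    """Tweedie's formula, an f-modeling empirical Bayes rule (sketch).

    E[theta | x] = x + sigma^2 * d/dx log f(x), with the marginal density f
    estimated here by a Gaussian kernel density estimate.
    """
    f = gaussian_kde(x)
    score = (np.log(f(x + eps)) - np.log(f(x - eps))) / (2 * eps)
    return x + sigma**2 * score

rng = np.random.default_rng(0)
theta = rng.normal(0.0, 2.0, size=2000)     # unknown effects, prior N(0, 4)
x = theta + rng.normal(size=2000)           # noisy observations, sigma = 1
theta_hat = tweedie(x)
# here the true Bayes rule is (4/5) * x, so theta_hat should track 0.8 * x
```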
A Conversation with Dick Dudley. Vladimir Koltchinskii, Richard Nickl, Philippe Rigollet. Source: Statistical Science, Volume 34, Number 1, 169–175. Abstract: Richard Mansfield Dudley (Dick Dudley) was born in 1938. He received the A.B. from Harvard in 1959 and the Ph.D. from Princeton in 1962 (under the supervision of Gilbert Hunt and Edward Nelson). Following an appointment at UC Berkeley as an assistant professor, he joined the Department of Mathematics at MIT in 1967. Dick Dudley has made fundamental contributions to the theory of Gaussian processes and Probability in Banach Spaces. Among his major achievements is the development of a general framework for empirical processes theory, in particular, for uniform central limit theorems. These results have had and continue having tremendous impact in contemporary statistics and in mathematical foundations of machine learning. A more extensive biographical sketch is contained in the preface to the Selected Works of R. M. Dudley (editors: E. Giné, V. Koltchinskii and R. Norvaisa) published in 2010. This conversation took place (mostly, via email) in the fall of 2017.
A Conversation with Piet Groeneboom. Geurt Jongbloed. Source: Statistical Science, Volume 34, Number 1, 156–168. Abstract: Petrus (Piet) Groeneboom was born in Scheveningen in 1941 and grew up in Voorburg. Both villages are located near The Hague in The Netherlands; Scheveningen actually being part of The Hague. He attended the gymnasium of the Huygens lyceum. In 1959, he entered the University of Amsterdam, where he studied psychology. After his “candidate” exam (comparable to BSc) in 1963, he worked at the psychological laboratory of the University of Amsterdam until 1966. In 1965, he took up mathematics as a part-time study. After having obtained his master’s degree in 1971, he had a position at the psychological laboratory again until 1973, when he was appointed to the Mathematical Center in Amsterdam. There, between 1975 and 1979, he wrote his Ph.D. thesis with Kobus Oosterhoff as advisor, graduating in 1979. After a period of two years as visiting professor at the University of Washington (UW) in Seattle, Piet moved back to the Mathematical Center until he was appointed full professor of statistics at the University of Amsterdam in 1984. Four years later, he moved to Delft University of Technology where he became professor of statistics and stayed until his retirement in 2006. Between 2000 and 2006 he also held a part-time professorship at the Vrije Universiteit in Amsterdam. From 1999 till 2013 he was Affiliate Professor at the statistics department of UW, Seattle. Apart from being visiting professor at the UW in Seattle, he was also visiting professor at Stanford University, Université Paris 6 and ETH Zürich. Piet is well known for his work on shape-constrained statistical inference. He worked on asymptotic theory for these problems, created algorithms to compute nonparametric estimates in such models and applied these models to real data. He also worked on interacting particle systems, extreme value analysis and efficiency theory for testing procedures. Piet (co-)authored four books and 64 papers and served as promotor of 13 students. He is the recipient of the 1985 Rollo Davidson prize, a fellow of the IMS and elected member of the ISI. In 2015, he delivered the Wald lecture at the Joint Statistical Meeting in Montreal. Piet and his wife Marijke live in Naarden. He has two sons, Thomas and Tim, and (since June 12, 2018) one grandson, Tarik. This conversation was held at Piet’s house in Naarden, on February 28 and April 24, 2018.
Generalized Multiple Importance Sampling. Víctor Elvira, Luca Martino, David Luengo, Mónica F. Bugallo. Source: Statistical Science, Volume 34, Number 1, 129–155. Abstract: Importance sampling (IS) methods are broadly used to approximate posterior distributions or their moments. In the standard IS approach, samples are drawn from a single proposal distribution and weighted adequately. However, since the performance in IS depends on the mismatch between the targeted and the proposal distributions, several proposal densities are often employed for the generation of samples. Under this multiple importance sampling (MIS) scenario, extensive literature has addressed the selection and adaptation of the proposal distributions, interpreting the sampling and weighting steps in different ways. In this paper, we establish a novel general framework with sampling and weighting procedures when more than one proposal is available. The new framework encompasses most relevant MIS schemes in the literature, and novel valid schemes appear naturally. All the MIS schemes are compared and ranked in terms of the variance of the associated estimators. Finally, we provide illustrative examples revealing that, even with a good choice of the proposal densities, a careful interpretation of the sampling and weighting procedures can make a significant difference in the performance of the method.
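The contrast between weighting interpretations is easy to see in code. Both estimators below are unbiased, but weighting each sample by the full mixture of proposals (the deterministic-mixture scheme, one of the MIS schemes compared in such frameworks) typically has lower variance than weighting each sample by its own proposal alone. The target, proposals and names below are illustrative choices of ours.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
target = lambda x: 0.5 * norm.pdf(x, -3, 1) + 0.5 * norm.pdf(x, 3, 1)
props = [norm(-3, 1.5), norm(3, 1.5)]        # two proposal distributions
N = 1000                                     # draws per proposal
samples = np.concatenate([q.rvs(size=N, random_state=rng) for q in props])

# Standard MIS: each sample weighted by its *own* proposal density
w_std = np.concatenate([
    target(samples[i * N:(i + 1) * N]) / props[i].pdf(samples[i * N:(i + 1) * N])
    for i in range(2)])
# Deterministic mixture: each sample weighted by the *mixture* density
mix = 0.5 * props[0].pdf(samples) + 0.5 * props[1].pdf(samples)
w_dm = target(samples) / mix

# Both estimate E_target[X^2] = 10; the mixture weights usually vary less
for name, w in [("standard", w_std), ("mixture ", w_dm)]:
    print(name, np.mean(w * samples**2), "weight sd:", w.std())
```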
Gaussian Integrals and Rice Series in Crossing Distributions—to Compute the Distribution of Maxima and Other Features of Gaussian Processes. Georg Lindgren. Source: Statistical Science, Volume 34, Number 1, 100–128. Abstract: We describe and compare how methods based on the classical Rice’s formula for the expected number, and higher moments, of level crossings by a Gaussian process stand up to contemporary numerical methods to accurately deal with crossing-related characteristics of the sample paths. We illustrate the relative merits in accuracy and computing time of the Rice moment methods and the exact numerical method, developed since the late 1990s, on three groups of distribution problems: the maximum over a finite interval and the waiting time to first crossing, the length of excursions over a level, and the joint period/amplitude of oscillations. We also treat the notoriously difficult problem of dependence between successive zero crossing distances. The exact solution has been known since at least 2000, but it has remained largely unnoticed outside the ocean science community. Extensive simulation studies illustrate the accuracy of the numerical methods. As a historical introduction, an attempt is made to illustrate the relation between Rice’s original formulation and arguments and the exact numerical methods.
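Rice's formula itself is one line: for a stationary zero-mean Gaussian process with covariance $r(t)$, the expected rate of upcrossings of a level $u$ is $\mu(u)=\frac{1}{2\pi}\sqrt{-r''(0)/r(0)}\,e^{-u^{2}/(2r(0))}$. The sketch below checks the formula by simulating discretized sample paths for $r(t)=e^{-t^{2}/2}$; it illustrates the formula only, not the exact numerical methods the paper surveys, and the grid sizes are ad hoc choices.

```python
import numpy as np

# For r(t) = exp(-t^2/2): r(0) = 1 and -r''(0) = 1, so the expected
# upcrossing rate of level u is mu(u) = exp(-u^2/2) / (2*pi).
rng = np.random.default_rng(0)
T, n, u = 20.0, 1000, 1.0
t = np.linspace(0.0, T, n)
C = np.exp(-0.5 * (t[:, None] - t[None, :]) ** 2)   # covariance matrix on the grid
vals, vecs = np.linalg.eigh(C)
A = vecs * np.sqrt(np.clip(vals, 0.0, None))        # matrix square root of C

reps, count = 200, 0
for _ in range(reps):
    x = A @ rng.normal(size=n)                      # one discretized sample path
    count += np.sum((x[:-1] < u) & (x[1:] >= u))    # upcrossings of level u
print("simulated rate:", count / (reps * T))
print("Rice's formula:", np.exp(-u**2 / 2) / (2 * np.pi))
```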
Rejoinder: Response to Discussions and a Look Ahead. Vincent Dorie, Jennifer Hill, Uri Shalit, Marc Scott, Dan Cervone. Source: Statistical Science, Volume 34, Number 1, 94–99. Abstract: Response to discussion of Dorie (2017), in which the authors of that piece express their gratitude to the discussants, rebut some specific criticisms, and argue that the limitations of the 2016 Atlantic Causal Inference Competition represent an exciting opportunity for future competitions in a similar mold.
Comment: Contributions of Model Features to BART Causal Inference Performance Using ACIC 2016 Competition Data. Nicole Bohme Carnegie. Source: Statistical Science, Volume 34, Number 1, 90–93. Abstract: With a thorough exposition of the methods and results of the 2016 Atlantic Causal Inference Competition, Dorie et al. have set a new standard for reproducibility and comparability of evaluations of causal inference methods. In particular, the open-source R package aciccomp2016, which permits reproduction of all datasets used in the competition, will be an invaluable resource for evaluation of future methodological developments. Building upon results from Dorie et al., we examine whether a set of potential modifications to Bayesian Additive Regression Trees (BART)—multiple chains in model fitting, using the propensity score as a covariate, targeted maximum likelihood estimation (TMLE), and computing symmetric confidence intervals—have a stronger impact on bias, RMSE, and confidence interval coverage in combination than they do alone. We find that bias in the estimate of SATT is minimal, regardless of the BART formulation. For purposes of CI coverage, however, all proposed modifications are beneficial—alone and in combination—but use of TMLE is least beneficial for coverage and results in considerably wider confidence intervals.
Comment: Causal Inference Competitions: Where Should We Aim? Ehud Karavani, Tal El-Hay, Yishai Shimoni, Chen Yanover. Source: Statistical Science, Volume 34, Number 1, 86–89. Abstract: Data competitions proved to be highly beneficial to the field of machine learning, and thus expected to provide similar advantages in the field of causal inference. As participants in the 2016 and 2017 Atlantic Causal Inference Conference (ACIC) data competitions and co-organizers of the 2018 competition, we discuss the strengths of simulation-based competitions and suggest potential extensions to address their limitations. These suggested augmentations aim at making the data generating processes more realistic and gradually increase in complexity, allowing thorough investigations of algorithms’ performance. We further outline a community-wide competition framework to evaluate an end-to-end causal inference pipeline, beginning with a causal question and a database, and ending with causal estimates.
Comment on “Automated Versus Do-It-Yourself Methods for Causal Inference: Lessons Learned from a Data Analysis Competition”. Susan Gruber, Mark J. van der Laan. Source: Statistical Science, Volume 34, Number 1, 82–85. Abstract: Dorie and co-authors (DHSSC) are to be congratulated for initiating the ACIC Data Challenge. Their project engaged the community and accelerated research by providing a level playing field for comparing the performance of a priori specified algorithms. DHSSC identified themes concerning characteristics of the DGP, properties of the estimators, and inference. We discuss these themes in the context of targeted learning.
Matching Methods for Causal Inference: A Review and a Look Forward. Elizabeth A. Stuart. Source: Statistical Science, Volume 25, Number 1, 1–21. Abstract: When estimating causal effects using observational data, it is desirable to replicate a randomized experiment as closely as possible by obtaining treated and control groups with similar covariate distributions. This goal can often be achieved by choosing well-matched samples of the original treated and control groups, thereby reducing bias due to the covariates. Since the 1970s, work on matching methods has examined how to best choose treated and control subjects for comparison. Matching methods are gaining popularity in fields such as economics, epidemiology, medicine and political science. However, until now the literature and related advice has been scattered across disciplines. Researchers who are interested in using matching methods—or developing methods related to matching—do not have a single place to turn to learn about past and current research. This paper provides a structure for thinking about matching methods and guidance on their use, coalescing the existing research (both old and new) and providing a summary of where the literature on matching methods is now and where it should be headed.
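A minimal sketch of one of the simplest procedures covered by this literature: greedy 1:1 nearest-neighbor matching on the estimated propensity score, without replacement and with a caliper. The propensity model, caliper value and names are implementation choices of ours.

```python
import numpy as np
import statsmodels.api as sm

def ps_match_att(X, treat, y, caliper=0.1):
    """Greedy 1:1 nearest-neighbor propensity score matching (sketch)."""
    ps = sm.Logit(treat, sm.add_constant(X)).fit(disp=0).predict()
    controls = np.where(treat == 0)[0]
    available = np.ones(len(controls), dtype=bool)
    diffs = []
    for i in np.where(treat == 1)[0]:
        d = np.abs(ps[controls] - ps[i])
        d[~available] = np.inf                # match without replacement
        j = np.argmin(d)
        if d[j] <= caliper:                   # discard matches worse than the caliper
            diffs.append(y[i] - y[controls[j]])
            available[j] = False
    return np.mean(diffs)                     # ATT estimate from matched pairs

rng = np.random.default_rng(0)
n = 2000
X = rng.normal(size=(n, 2))
p = 1 / (1 + np.exp(-(X[:, 0] + X[:, 1])))    # confounded treatment assignment
treat = (rng.uniform(size=n) < p).astype(int)
y = X[:, 0] + X[:, 1] + 2.0 * treat + rng.normal(size=n)   # true effect: 2
print(ps_match_att(X, treat, y))              # the naive difference in means is biased
```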
Heteromodal Cortical Areas Encode Sensory-Motor Features of Word Meaning. Source: The Journal of Neuroscience, September 21, 2016. Abstract: The capacity to process information in conceptual form is a fundamental aspect of human cognition, yet little is known about how this type of information is encoded in the brain. Although the role of sensory and motor cortical areas has been a focus of recent debate, neuroimaging studies of concept representation consistently implicate a network of heteromodal areas that seem to support concept retrieval in general rather than knowledge related to any particular sensory-motor content. We used predictive machine learning on fMRI data to investigate the hypothesis that cortical areas in this "general semantic network" (GSN) encode multimodal information derived from basic sensory-motor processes, possibly functioning as convergence–divergence zones for distributed concept representation. An encoding model based on five conceptual attributes directly related to sensory-motor experience (sound, color, shape, manipulability, and visual motion) was used to predict brain activation patterns associated with individual lexical concepts in a semantic decision task. When the analysis was restricted to voxels in the GSN, the model was able to identify the activation patterns corresponding to individual concrete concepts significantly above chance. In contrast, a model based on five perceptual attributes of the word form performed at chance level. This pattern was reversed when the analysis was restricted to areas involved in the perceptual analysis of written word forms. These results indicate that heteromodal areas involved in semantic processing encode information about the relative importance of different sensory-motor attributes of concepts, possibly by storing particular combinations of sensory and motor features. SIGNIFICANCE STATEMENT: The present study used a predictive encoding model of word semantics to decode conceptual information from neural activity in heteromodal cortical areas. The model is based on five sensory-motor attributes of word meaning (color, shape, sound, visual motion, and manipulability) and encodes the relative importance of each attribute to the meaning of a word. This is the first demonstration that heteromodal areas involved in semantic processing can discriminate between different concepts based on sensory-motor information alone. This finding indicates that the brain represents concepts as multimodal combinations of sensory and motor representations.
Smart women don't smoke / Biman Mullick. London (33 Stillness Road, London SE23 1NG): Cleanair, Campaign for a Smoke-free Environment, [1989?].