Formation and control of biofilm in various environments. By dal.novanet.ca. Published on: Fri, 1 May 2020 19:44:43 -0300. Author: Kanematsu, Hideyuki. Call number: Online. ISBN: 9789811522406 (electronic bk.)
Enterprise information systems : 21st International Conference, ICEIS 2019, Heraklion, Crete, Greece, May 3-5, 2019, Revised Selected Papers. By dal.novanet.ca. Published on: Fri, 1 May 2020 19:44:43 -0300. Author: International Conference on Enterprise Information Systems (21st : 2019 : Ērakleion, Greece). Call number: Online. ISBN: 9783030407834 (electronic bk.)
Emerging and transboundary animal viruses. By dal.novanet.ca. Published on: Fri, 1 May 2020 19:44:43 -0300. Call number: Online. ISBN: 9789811504020 (electronic bk.)
Dynamics of immune activation in viral diseases. By dal.novanet.ca. Published on: Fri, 1 May 2020 19:44:43 -0300. Call number: Online. ISBN: 9789811510458 (electronic bk.)
Development of biopharmaceutical drug-device products. By dal.novanet.ca. Published on: Fri, 1 May 2020 19:44:43 -0300. Call number: Online. ISBN: 9783030314156 (electronic bk.)
Dictionary of Construction, Surveying, and Civil Engineering. By dal.novanet.ca. Published on: Fri, 1 May 2020 19:44:43 -0300. Call number: Online. ISBN: 9780192568632 (electronic bk.)
Current microbiological research in Africa : selected applications for sustainable environmental management. By dal.novanet.ca. Published on: Fri, 1 May 2020 19:44:43 -0300. Call number: Online. ISBN: 9783030352967 (electronic bk.)
Crafting qualitative research : beyond positivist traditions. By dal.novanet.ca. Published on: Fri, 1 May 2020 19:44:43 -0300. Author: Prasad, Pushkala. Call number: Online. ISBN: 9781315715070 (e-book)
Computer security : ESORICS 2019 International Workshops, IOSec, MSTEC, and FINSEC, Luxembourg City, Luxembourg, September 26-27, 2019, Revised Selected Papers. By dal.novanet.ca. Published on: Fri, 1 May 2020 19:44:43 -0300. Author: European Symposium on Research in Computer Security (24th : 2019 : Luxembourg, Luxembourg). Call number: Online. ISBN: 9783030420512 (electronic bk.)
Complete denture prosthodontics : treatment and problem solving. By dal.novanet.ca. Published on: Fri, 1 May 2020 19:44:43 -0300. Call number: Online. ISBN: 9783319690179 (electronic bk.)
Common problems in the newborn nursery : an evidence and case-based guide. By dal.novanet.ca. Published on: Fri, 1 May 2020 19:44:43 -0300. Call number: Online. ISBN: 9783319956725 (electronic bk.)
Beyond our genes : pathophysiology of gene and environment interaction and epigenetic inheritance. By dal.novanet.ca. Published on: Fri, 1 May 2020 19:44:43 -0300. Call number: Online. ISBN: 9783030352134 (electronic bk.)
Aquatic biopolymers : understanding their industrial significance and environmental implications. By dal.novanet.ca. Published on: Fri, 1 May 2020 19:44:43 -0300. Author: Olatunji, Ololade. Call number: Online. ISBN: 9783030347093 (electronic bk.)
Advances in virus research. By dal.novanet.ca. Published on: Fri, 1 May 2020 19:44:43 -0300. Call number: Online. ISBN: 9780123850348 (electronic bk.)
COVID-19 Update. By www.eastgwillimbury.ca. Published on: Wed, 06 May 2020 18:48:44 GMT.
In Battle to Fight Coronavirus Pandemic, LeadingAge Nursing Home... By www.prweb.com. Aging Services Providers Dedicated to Fulfilling Their Critical Role in Public Health System (PRWeb, April 18, 2020). Read the full story at https://www.prweb.com/releases/in_battle_to_fight_coronavirus_pandemic_leadingage_nursing_home_members_support_texas_action_to_gather_and_leverage_data/prweb17055806.htm
New Partnerships Emerge for COVID-19 Relief: Dade County Farm Bureau... By www.prweb.com. Harvested produce crops feed Florida Department of Corrections’ (FDC) more than 87,000 inmates; the action saves food costs while reducing COVID-19-related supply chain impacts (PRWeb, April 20, 2020). Read the full story at https://www.prweb.com/releases/new_partnerships_emerge_for_covid_19_relief_dade_county_farm_bureau_teams_with_state_leaders_to_launch_farm_to_inmate_program/prweb17052045.htm
AgileAssets v7.5 Improves Flexibility, Field Productivity for Tunnel... By www.prweb.com. Web and mobile applications enhance efficiency and data accuracy using satellite maps and offline capabilities (PRWeb, April 23, 2020). Read the full story at https://www.prweb.com/releases/agileassets_v7_5_improves_flexibility_field_productivity_for_tunnel_inspections_asset_maintenance/prweb17071093.htm
Viable Policy Pathways Expand Access to Renewable Energy for... By www.prweb.com. The newly launched REBA Institute shares research suggesting multiple policy pathways increase access, lower costs and drive decarbonization of the electricity sector (PRWeb, May 05, 2020). Read the full story at https://www.prweb.com/releases/viable_policy_pathways_expand_access_to_renewable_energy_for_commercial_industrial_sector/prweb17099869.htm
Averages of unlabeled networks: Geometric characterization and asymptotic behavior. By projecteuclid.org. Published on: Mon, 17 Feb 2020 04:02 EST. Authors: Eric D. Kolaczyk, Lizhen Lin, Steven Rosenberg, Jackson Walters, Jie Xu. Source: The Annals of Statistics, Volume 48, Number 1, 514–538. Abstract: It is becoming increasingly common to see large collections of network data objects, that is, data sets in which a network is viewed as a fundamental unit of observation. As a result, there is a pressing need to develop network-based analogues of even many of the most basic tools already standard for scalar and vector data. In this paper, our focus is on averages of unlabeled, undirected networks with edge weights. Specifically, we (i) characterize a certain notion of the space of all such networks, (ii) describe key topological and geometric properties of this space relevant to doing probability and statistics thereupon, and (iii) use these properties to establish the asymptotic behavior of a generalized notion of an empirical mean under sampling from a distribution supported on this space. Our results rely on a combination of tools from geometry, probability theory and statistical shape analysis. In particular, the lack of vertex labeling necessitates working with a quotient space modding out permutations of labels. This results in a nontrivial geometry for the space of unlabeled networks, which in turn is found to have important implications on the types of probabilistic and statistical results that may be obtained and the techniques needed to obtain them.
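As a rough illustration of the quotient-space idea in the abstract above (averaging networks after modding out vertex permutations), the following sketch computes an empirical mean of small unlabeled weighted graphs by brute-force permutation alignment. The alignment-then-average scheme, the function names and the toy data are illustrative assumptions, not the authors' construction, and the brute-force search is only feasible for tiny graphs.

```python
import itertools
import numpy as np

def align_to_reference(adj, ref):
    """Brute force: permute the vertex labels of `adj` to best match `ref`
    in Frobenius norm (only feasible for very small graphs)."""
    n = adj.shape[0]
    best, best_err = adj, np.inf
    for perm in itertools.permutations(range(n)):
        p = list(perm)
        cand = adj[np.ix_(p, p)]
        err = np.linalg.norm(cand - ref)
        if err < best_err:
            best, best_err = cand, err
    return best

def unlabeled_mean(adjs, n_iter=10):
    """Crude empirical mean of unlabeled weighted graphs: iteratively align
    every graph to the current mean, then average the aligned matrices."""
    mean = adjs[0].copy()
    for _ in range(n_iter):
        aligned = [align_to_reference(a, mean) for a in adjs]
        mean = np.mean(aligned, axis=0)
    return mean

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    base = rng.random((4, 4)); base = (base + base.T) / 2; np.fill_diagonal(base, 0)
    sample = []
    for _ in range(20):
        noisy = base + 0.05 * rng.standard_normal((4, 4))
        noisy = (noisy + noisy.T) / 2; np.fill_diagonal(noisy, 0)
        perm = rng.permutation(4)                  # hide the vertex labels
        sample.append(noisy[np.ix_(perm, perm)])
    print(np.round(unlabeled_mean(sample), 3))
```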
Consistent selection of the number of change-points via sample-splitting. By projecteuclid.org. Published on: Mon, 17 Feb 2020 04:02 EST. Authors: Changliang Zou, Guanghui Wang, Runze Li. Source: The Annals of Statistics, Volume 48, Number 1, 413–439. Abstract: In multiple change-point analysis, one of the major challenges is to estimate the number of change-points. Most existing approaches attempt to minimize a Schwarz information criterion which balances a term quantifying model fit with a penalization term accounting for model complexity that increases with the number of change-points and limits overfitting. However, different penalization terms are required to adapt to different contexts of multiple change-point problems, and the optimal penalization magnitude usually varies with the model and error distribution. We propose a data-driven selection criterion that is applicable to most kinds of popular change-point detection methods, including binary segmentation and optimal partitioning algorithms. The key idea is to select the number of change-points that minimizes the squared prediction error, which measures the fit of a specified model for a new sample. We develop a cross-validation estimation scheme based on an order-preserved sample-splitting strategy, and establish its asymptotic selection consistency under some mild conditions. The effectiveness of the proposed selection criterion is demonstrated on a variety of numerical experiments and real-data examples.
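A minimal sketch of the selection idea described above, under simplifying assumptions: piecewise-constant means, a greedy binary-segmentation fitter written from scratch, and an order-preserved split into odd- and even-indexed observations. The function names and constants are illustrative, not the authors' implementation.

```python
import numpy as np

def best_split(x):
    """Best single split of x for a piecewise-constant mean (minimum SSE)."""
    best_t, best_sse = None, np.inf
    for t in range(1, len(x)):
        sse = np.sum((x[:t] - x[:t].mean())**2) + np.sum((x[t:] - x[t:].mean())**2)
        if sse < best_sse:
            best_t, best_sse = t, sse
    return best_t

def binary_segmentation(x, k):
    """Greedy binary segmentation returning k change-point indices."""
    cps = []
    for _ in range(k):
        best = None
        bounds = [0] + sorted(cps) + [len(x)]
        for lo, hi in zip(bounds[:-1], bounds[1:]):
            if hi - lo < 2:
                continue
            t = lo + best_split(x[lo:hi])
            gain = (np.sum((x[lo:hi] - x[lo:hi].mean())**2)
                    - np.sum((x[lo:t] - x[lo:t].mean())**2)
                    - np.sum((x[t:hi] - x[t:hi].mean())**2))
            if best is None or gain > best[0]:
                best = (gain, t)
        cps.append(best[1])
    return sorted(cps)

def cv_num_changepoints(y, k_max=6):
    """Order-preserved sample splitting: fit change points on odd-indexed
    observations, score squared prediction error on even-indexed ones."""
    train, test = y[0::2], y[1::2]          # both halves keep the time order
    errs = []
    for k in range(k_max + 1):
        cps = binary_segmentation(train, k) if k > 0 else []
        bounds = [0] + cps + [len(train)]
        err = 0.0
        for lo, hi in zip(bounds[:-1], bounds[1:]):
            mu = train[lo:hi].mean()                 # segment mean from train half
            err += np.sum((test[lo:hi] - mu) ** 2)   # same positions in test half
        errs.append(err)
    return int(np.argmin(errs)), errs

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    y = np.concatenate([rng.normal(m, 1, 100) for m in (0, 3, -1)])  # 2 true change points
    print(cv_num_changepoints(y))
```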
Semi-supervised inference: General theory and estimation of means. By projecteuclid.org. Published on: Fri, 02 Aug 2019 22:04 EDT. Authors: Anru Zhang, Lawrence D. Brown, T. Tony Cai. Source: The Annals of Statistics, Volume 47, Number 5, 2538–2566. Abstract: We propose a general semi-supervised inference framework focused on the estimation of the population mean. As usual in semi-supervised settings, there exists an unlabeled sample of covariate vectors and a labeled sample consisting of covariate vectors along with real-valued responses (“labels”). Otherwise, the formulation is “assumption-lean” in that no major conditions are imposed on the statistical or functional form of the data. We consider both the ideal semi-supervised setting where infinitely many unlabeled samples are available, as well as the ordinary semi-supervised setting in which only a finite number of unlabeled samples is available. Estimators are proposed along with corresponding confidence intervals for the population mean. A theoretical analysis of both the asymptotic distribution and the $\ell_{2}$-risk of the proposed procedures is given. Surprisingly, the proposed estimators, based on a simple form of the least squares method, outperform the ordinary sample mean. The simple, transparent form of the estimator lends confidence to the perception that its asymptotic improvement over the ordinary sample mean also nearly holds even for moderate sample sizes. The method is further extended to a nonparametric setting, in which the oracle rate can be achieved asymptotically. The proposed estimators are further illustrated by simulation studies and a real data example involving estimation of the homeless population.
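A minimal sketch of a least-squares adjustment in the spirit of the abstract above: the labeled-sample mean of the response is shifted using regression coefficients fitted on the labeled data and the pooled covariate mean from both samples. The exact estimator and its theory are in the paper; this toy version is only meant to show the mechanics.

```python
import numpy as np

def semi_supervised_mean(x_lab, y_lab, x_unlab):
    """Estimate E[Y] using labeled (x, y) pairs plus unlabeled covariates:
    adjust the labeled-sample mean of y by a least-squares regression term."""
    n = len(y_lab)
    X = np.column_stack([np.ones(n), x_lab])
    beta = np.linalg.lstsq(X, y_lab, rcond=None)[0]        # OLS fit on labeled data
    x_all_mean = np.vstack([x_lab, x_unlab]).mean(axis=0)  # pooled covariate mean
    x_lab_mean = x_lab.mean(axis=0)
    # shift the naive mean in the direction the covariates say it should move
    return y_lab.mean() + beta[1:] @ (x_all_mean - x_lab_mean)

if __name__ == "__main__":
    rng = np.random.default_rng(2)
    d, n_lab, n_unlab = 3, 200, 5000
    x = rng.normal(size=(n_lab + n_unlab, d))
    y = x @ np.array([1.0, -2.0, 0.5]) + rng.normal(size=n_lab + n_unlab)
    est = semi_supervised_mean(x[:n_lab], y[:n_lab], x[n_lab:])
    print("naive:", y[:n_lab].mean(), "semi-supervised:", est)
```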
Correction: Sensitivity analysis for an unobserved moderator in RCT-to-target-population generalization of treatment effects. By projecteuclid.org. Published on: Wed, 15 Apr 2020 22:05 EDT. Authors: Trang Quynh Nguyen, Elizabeth A. Stuart. Source: The Annals of Applied Statistics, Volume 14, Number 1, 518–520.
Measuring human activity spaces from GPS data with density ranking and summary curves. By projecteuclid.org. Published on: Wed, 15 Apr 2020 22:05 EDT. Authors: Yen-Chi Chen, Adrian Dobra. Source: The Annals of Applied Statistics, Volume 14, Number 1, 409–432. Abstract: Activity spaces are fundamental to the assessment of individuals’ dynamic exposure to social and environmental risk factors associated with multiple spatial contexts that are visited during activities of daily living. In this paper we survey existing approaches for measuring the geometry, size and structure of activity spaces, based on GPS data, and explain their limitations. We propose addressing these shortcomings through a nonparametric approach called density ranking and also through three summary curves: the mass-volume curve, the Betti number curve and the persistence curve. We introduce a novel mixture model for human activity spaces and study its asymptotic properties. We prove that the kernel density estimator, which, at the present time, is one of the most widespread methods for measuring activity spaces, is not a stable estimator of their structure. We illustrate the practical value of our methods with a simulation study and with a recently collected GPS dataset that comprises the locations visited by 10 individuals over a six-month period.
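The mass-volume curve mentioned above can be approximated on a grid from a kernel density estimate. The sketch below does this for toy two-dimensional "GPS" data; the grid approximation, bandwidth choice and constants are assumptions for illustration and do not reproduce the paper's density-ranking machinery.

```python
import numpy as np
from scipy.stats import gaussian_kde

def mass_volume_curve(points, grid_size=100, n_levels=50):
    """Grid approximation of the mass-volume curve of a 2-D KDE: for each
    density level, the probability mass vs. the area of the super-level set
    {x : f_hat(x) >= level}."""
    kde = gaussian_kde(points.T)
    lo, hi = points.min(axis=0), points.max(axis=0)
    xs = np.linspace(lo[0], hi[0], grid_size)
    ys = np.linspace(lo[1], hi[1], grid_size)
    xx, yy = np.meshgrid(xs, ys)
    dens = kde(np.vstack([xx.ravel(), yy.ravel()])).reshape(xx.shape)
    cell_area = (xs[1] - xs[0]) * (ys[1] - ys[0])
    curve = []
    for level in np.linspace(0, dens.max(), n_levels):
        region = dens >= level
        mass = dens[region].sum() * cell_area      # approximate probability mass
        volume = region.sum() * cell_area          # approximate area of the set
        curve.append((mass, volume))
    return np.array(curve)

if __name__ == "__main__":
    rng = np.random.default_rng(3)
    # toy "GPS" data: two frequently visited locations
    pts = np.vstack([rng.normal([0, 0], 0.3, size=(500, 2)),
                     rng.normal([3, 1], 0.6, size=(500, 2))])
    print(mass_volume_curve(pts)[:5])
```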
Estimating the health effects of environmental mixtures using Bayesian semiparametric regression and sparsity inducing priors. By projecteuclid.org. Published on: Wed, 15 Apr 2020 22:05 EDT. Authors: Joseph Antonelli, Maitreyi Mazumdar, David Bellinger, David Christiani, Robert Wright, Brent Coull. Source: The Annals of Applied Statistics, Volume 14, Number 1, 257–275. Abstract: Humans are routinely exposed to mixtures of chemical and other environmental factors, making the quantification of health effects associated with environmental mixtures a critical goal for establishing environmental policy sufficiently protective of human health. The quantification of the effects of exposure to an environmental mixture poses several statistical challenges. It is often the case that exposures to multiple pollutants interact with each other to affect an outcome. Further, the exposure-response relationship between an outcome and some exposures, such as some metals, can exhibit complex, nonlinear forms, since some exposures can be beneficial and detrimental at different ranges of exposure. To estimate the health effects of complex mixtures, we propose a flexible Bayesian approach that allows exposures to interact with each other and have nonlinear relationships with the outcome. We induce sparsity using multivariate spike and slab priors to determine which exposures are associated with the outcome and which exposures interact with each other. The proposed approach is interpretable, as we can use the posterior probabilities of inclusion into the model to identify pollutants that interact with each other. We utilize our approach to study the impact of exposure to metals on child neurodevelopment in Bangladesh and find a nonlinear, interactive relationship between arsenic and manganese.
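The abstract above combines semiparametric regression with spike-and-slab priors. As a heavily simplified, linear-model stand-in for that idea, here is a George-McCulloch style stochastic search variable selection Gibbs sampler; the prior settings, update formulas and names are standard textbook choices, not the authors' model, and no nonlinearities or interactions are handled.

```python
import numpy as np

def ssvs_gibbs(X, y, n_iter=2000, tau_spike=0.01, tau_slab=1.0, pi=0.2,
               a0=2.0, b0=1.0, seed=0):
    """Continuous spike-and-slab prior on each regression coefficient of a
    linear model y = X beta + eps, with Gibbs updates for (beta, gamma, sigma2).
    Returns posterior inclusion probabilities for the columns of X."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    beta, gamma, sigma2 = np.zeros(d), np.zeros(d, dtype=bool), 1.0
    keep_gamma = np.zeros(d)
    XtX, Xty = X.T @ X, X.T @ y
    for it in range(n_iter):
        # beta | gamma, sigma2: Gaussian with a ridge-type precision
        prior_prec = np.where(gamma, 1.0 / tau_slab**2, 1.0 / tau_spike**2)
        A = np.linalg.inv(XtX / sigma2 + np.diag(prior_prec))
        A = (A + A.T) / 2                           # symmetrize for sampling
        beta = rng.multivariate_normal(A @ (Xty / sigma2), A)
        # gamma_j | beta_j: Bernoulli from slab vs spike densities at beta_j
        log_slab = -0.5 * beta**2 / tau_slab**2 - np.log(tau_slab) + np.log(pi)
        log_spike = -0.5 * beta**2 / tau_spike**2 - np.log(tau_spike) + np.log(1 - pi)
        gamma = rng.random(d) < 1.0 / (1.0 + np.exp(log_spike - log_slab))
        # sigma2 | beta: inverse-gamma
        resid = y - X @ beta
        sigma2 = 1.0 / rng.gamma(a0 + n / 2, 1.0 / (b0 + 0.5 * resid @ resid))
        if it >= n_iter // 2:                       # keep the second half
            keep_gamma += gamma
    return keep_gamma / (n_iter - n_iter // 2)

if __name__ == "__main__":
    rng = np.random.default_rng(6)
    n, d = 200, 8
    X = rng.normal(size=(n, d))
    beta_true = np.array([1.5, 0, 0, -2.0, 0, 0, 0, 0.8])
    y = X @ beta_true + rng.normal(size=n)
    print(np.round(ssvs_gibbs(X, y), 2))   # high values flag active exposures
```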
Surface temperature monitoring in liver procurement via functional variance change-point analysis. By projecteuclid.org. Published on: Wed, 15 Apr 2020 22:05 EDT. Authors: Zhenguo Gao, Pang Du, Ran Jin, John L. Robertson. Source: The Annals of Applied Statistics, Volume 14, Number 1, 143–159. Abstract: Liver procurement experiments with surface-temperature monitoring motivated Gao et al. (J. Amer. Statist. Assoc. 114 (2019) 773–781) to develop a variance change-point detection method under a smoothly-changing mean trend. However, the spotwise change points yielded from their method do not offer immediate information to surgeons, since an organ is often transplanted as a whole or in part. We develop a new practical method that can analyze a defined portion of the organ surface at a time. It also provides a novel addition to the developing field of functional data monitoring. Furthermore, a numerical challenge emerges for simultaneously modeling the variance functions of 2D locations and the mean function of location and time. The respective sample sizes on the scales of 10,000 and 1,000,000 for modeling these functions make standard spline estimation too costly to be useful. We introduce a multistage subsampling strategy with steps educated by quickly-computable preliminary statistical measures. Extensive simulations show that the new method can efficiently reduce the computational cost and provide reasonable parameter estimates. Application of the new method to our liver surface temperature monitoring data shows its effectiveness in providing accurate status change information for a selected portion of the organ in the experiment.
Integrative survival analysis with uncertain event times in application to a suicide risk study. By projecteuclid.org. Published on: Wed, 15 Apr 2020 22:05 EDT. Authors: Wenjie Wang, Robert Aseltine, Kun Chen, Jun Yan. Source: The Annals of Applied Statistics, Volume 14, Number 1, 51–73. Abstract: The concept of integrating data from disparate sources to accelerate scientific discovery has generated tremendous excitement in many fields. The potential benefits from data integration, however, may be compromised by the uncertainty due to incomplete/imperfect record linkage. Motivated by a suicide risk study, we propose an approach for analyzing survival data with uncertain event times arising from data integration. Specifically, in our problem deaths identified from the hospital discharge records together with reported suicidal deaths determined by the Office of Medical Examiner may still not include all the death events of patients, and the missing deaths can be recovered from a complete database of death records. Since the hospital discharge data can only be linked to the death record data by matching basic patient characteristics, a patient with a censored death time from the first dataset could be linked to multiple potential event records in the second dataset. We develop an integrative Cox proportional hazards regression in which the uncertainty in the matched event times is modeled probabilistically. The estimation procedure combines the ideas of profile likelihood and the expectation conditional maximization algorithm (ECM). Simulation studies demonstrate that under realistic settings of imperfect data linkage the proposed method outperforms several competing approaches including multiple imputation. A marginal screening analysis using the proposed integrative Cox model is performed to identify risk factors associated with death following suicide-related hospitalization in Connecticut. The identified diagnostic codes are consistent with existing literature and provide several new insights on suicide risk, prediction and prevention.
A general theory for preferential sampling in environmental networks. By projecteuclid.org. Published on: Wed, 27 Nov 2019 22:01 EST. Authors: Joe Watson, James V. Zidek, Gavin Shaddick. Source: The Annals of Applied Statistics, Volume 13, Number 4, 2662–2700. Abstract: This paper presents a general model framework for detecting the preferential sampling of environmental monitors recording an environmental process across space and/or time. This is achieved by considering the joint distribution of an environmental process with a site-selection process that considers where and when sites are placed to measure the process. The environmental process may be spatial, temporal or spatio-temporal in nature. By sharing random effects between the two processes, the joint model is able to establish whether site placement was stochastically dependent on the environmental process under study. Furthermore, if stochastic dependence is identified between the two processes, then inferences about the probability distribution of the spatio-temporal process will change, as will predictions made of the process across space and time. The embedding into a spatio-temporal framework also allows for the modelling of the dynamic site-selection process itself. Real-world factors affecting both the size and location of the network can be easily modelled and quantified. Depending upon the choice of the population of locations considered for selection across space and time under the site-selection process, different insights about the precise nature of preferential sampling can be obtained. The general framework developed in the paper is designed to be easily and quickly fit using the R-INLA package. We apply this framework to a case study involving particulate air pollution over the UK where a major reduction in the size of a monitoring network through time occurred. It is demonstrated that a significant response-biased reduction in the air quality monitoring network occurred, namely the relocation of monitoring sites to locations with the highest pollution levels, and the routine removal of sites at locations with the lowest. We also show that the network was consistently unrepresentative of the levels of particulate matter seen across much of GB throughout the operating life of the network. Finally, we show that this may have led to a severe overreporting of the population-average exposure levels experienced across GB. This could have great impacts on estimates of the health effects of black smoke levels.
Hierarchical infinite factor models for improving the prediction of surgical complications for geriatric patients. By projecteuclid.org. Published on: Wed, 27 Nov 2019 22:01 EST. Authors: Elizabeth Lorenzi, Ricardo Henao, Katherine Heller. Source: The Annals of Applied Statistics, Volume 13, Number 4, 2637–2661. Abstract: Nearly a third of all surgeries performed in the United States occur for patients over the age of 65; these older adults experience a higher rate of postoperative morbidity and mortality. To improve the care for these patients, we aim to identify and characterize high risk geriatric patients to send to a specialized perioperative clinic while leveraging the overall surgical population to improve learning. To this end, we develop a hierarchical infinite latent factor model (HIFM) to appropriately account for the covariance structure across subpopulations in data. We propose a novel Hierarchical Dirichlet Process shrinkage prior on the loadings matrix that flexibly captures the underlying structure of our data while sharing information across subpopulations to improve inference and prediction. The stick-breaking construction of the prior assumes an infinite number of factors and allows for each subpopulation to utilize different subsets of the factor space and select the number of factors needed to best explain the variation. We develop the model into a latent factor regression method that excels at prediction and inference of regression coefficients. Simulations validate this strong performance compared to baseline methods. We apply this work to the problem of predicting surgical complications using electronic health record data for geriatric patients and all surgical patients at Duke University Health System (DUHS). The motivating application demonstrates the improved predictive performance when using HIFM in both area under the ROC curve and area under the PR curve while providing interpretable coefficients that may lead to actionable interventions.
Fitting a deeply nested hierarchical model to a large book review dataset using a moment-based estimator. By projecteuclid.org. Published on: Wed, 27 Nov 2019 22:01 EST. Authors: Ningshan Zhang, Kyle Schmaus, Patrick O. Perry. Source: The Annals of Applied Statistics, Volume 13, Number 4, 2260–2288. Abstract: We consider a particular instance of a common problem in recommender systems, using a database of book reviews to inform user-targeted recommendations. In our dataset, books are categorized into genres and subgenres. To exploit this nested taxonomy, we use a hierarchical model that enables information pooling across similar items at many levels within the genre hierarchy. The main challenge in deploying this model is computational. The data sizes are large and fitting the model at scale using off-the-shelf maximum likelihood procedures is prohibitive. To get around this computational bottleneck, we extend a moment-based fitting procedure proposed for fitting single-level hierarchical models to the general case of arbitrarily deep hierarchies. This extension is an order of magnitude faster than standard maximum likelihood procedures. The fitting method can be deployed beyond recommender systems to general contexts with deeply nested hierarchical generalized linear mixed models.
Statistical inference for partially observed branching processes with application to cell lineage tracking of in vivo hematopoiesis. By projecteuclid.org. Published on: Wed, 27 Nov 2019 22:01 EST. Authors: Jason Xu, Samson Koelle, Peter Guttorp, Chuanfeng Wu, Cynthia Dunbar, Janis L. Abkowitz, Vladimir N. Minin. Source: The Annals of Applied Statistics, Volume 13, Number 4, 2091–2119. Abstract: Single-cell lineage tracking strategies enabled by recent experimental technologies have produced significant insights into cell fate decisions, but lack the quantitative framework necessary for rigorous statistical analysis of mechanistic models describing cell division and differentiation. In this paper, we develop such a framework with corresponding moment-based parameter estimation techniques for continuous-time, multi-type branching processes. Such processes provide a probabilistic model of how cells divide and differentiate, and we apply our method to study hematopoiesis, the mechanism of blood cell production. We derive closed-form expressions for higher moments in a general class of such models. These analytical results allow us to efficiently estimate parameters of much richer statistical models of hematopoiesis than those used in previous statistical studies. To our knowledge, the method provides the first rate inference procedure for fitting such models to time series data generated from cellular barcoding experiments. After validating the methodology in simulation studies, we apply our estimator to hematopoietic lineage tracking data from rhesus macaques. Our analysis provides a more complete understanding of cell fate decisions during hematopoiesis in nonhuman primates, which may be more relevant to human biology and clinical strategies than previous findings from murine studies. For example, in addition to previously estimated hematopoietic stem cell self-renewal rate, we are able to estimate fate decision probabilities and to compare structurally distinct models of hematopoiesis using cross validation. These estimates of fate decision probabilities and our model selection results should help biologists compare competing hypotheses about how progenitor cells differentiate. The methodology is transferrable to a large class of stochastic compartmental and multi-type branching models, commonly used in studies of cancer progression, epidemiology and many other fields.
Estimating abundance from multiple sampling capture-recapture data via a multi-state multi-period stopover model. By projecteuclid.org. Published on: Wed, 27 Nov 2019 22:01 EST. Authors: Hannah Worthington, Rachel McCrea, Ruth King, Richard Griffiths. Source: The Annals of Applied Statistics, Volume 13, Number 4, 2043–2064. Abstract: Capture-recapture studies often involve collecting data on numerous capture occasions over a relatively short period of time. For many study species this process is repeated, for example, annually, resulting in capture information spanning multiple sampling periods. To account for the different temporal scales, the robust design class of models have traditionally been applied, providing a framework in which to analyse all of the available capture data in a single likelihood expression. However, these models typically require strong constraints, either the assumption of closure within a sampling period (the closed robust design) or conditioning on the number of individuals captured within a sampling period (the open robust design). For real datasets these assumptions may not be appropriate. We develop a general modelling structure that requires neither assumption by explicitly modelling the movement of individuals into the population both within and between the sampling periods, which in turn permits the estimation of abundance within a single consistent framework. The flexibility of the novel model structure is further demonstrated by including the computationally challenging case of multi-state data where there is individual time-varying discrete covariate information. We derive an efficient likelihood expression for the new multi-state multi-period stopover model using the hidden Markov model framework. We demonstrate the significant improvement in parameter estimation using our new modelling approach in terms of both the multi-period and multi-state components through both a simulation study and a real dataset relating to the protected species of great crested newts, Triturus cristatus.
Estimating the rate constant from biosensor data via an adaptive variational Bayesian approach. By projecteuclid.org. Published on: Wed, 27 Nov 2019 22:01 EST. Authors: Ye Zhang, Zhigang Yao, Patrik Forssén, Torgny Fornstedt. Source: The Annals of Applied Statistics, Volume 13, Number 4, 2011–2042. Abstract: The means to obtain the rate constants of a chemical reaction is a fundamental open problem in both science and industry. Traditional techniques for finding rate constants require either chemical modifications of the reactants or indirect measurements. The rate constant map method is a modern technique to study binding equilibrium and kinetics in chemical reactions. Finding a rate constant map from biosensor data is an ill-posed inverse problem that is usually solved by regularization. In this work, rather than finding a deterministic regularized rate constant map that does not provide uncertainty quantification of the solution, we develop an adaptive variational Bayesian approach to estimate the distribution of the rate constant map, from which some intrinsic properties of a chemical reaction can be explored, including information about rate constants. Our new approach is more realistic than the existing approaches used for biosensors and allows us to estimate the dynamics of the interactions, which are usually hidden in a deterministic approximate solution. We verify the performance of the newly proposed method by numerical simulations and compare it with the Markov chain Monte Carlo algorithm. The results illustrate that the variational method can reliably capture the posterior distribution in a computationally efficient way. Finally, the developed method is also tested on real biosensor data (parathyroid hormone), where we provide two novel analysis tools, the thresholding contour map and the high-order moment map, to estimate the number of interactions as well as their rate constants.
Oblique random survival forests. By projecteuclid.org. Published on: Wed, 16 Oct 2019 22:03 EDT. Authors: Byron C. Jaeger, D. Leann Long, Dustin M. Long, Mario Sims, Jeff M. Szychowski, Yuan-I Min, Leslie A. Mcclure, George Howard, Noah Simon. Source: The Annals of Applied Statistics, Volume 13, Number 3, 1847–1883. Abstract: We introduce and evaluate the oblique random survival forest (ORSF). The ORSF is an ensemble method for right-censored survival data that uses linear combinations of input variables to recursively partition a set of training data. Regularized Cox proportional hazard models are used to identify linear combinations of input variables in each recursive partitioning step. Benchmark results using simulated and real data indicate that the ORSF’s predicted risk function has high prognostic value in comparison to random survival forests, conditional inference forests, regression and boosting. In an application to data from the Jackson Heart Study, we demonstrate variable and partial dependence using the ORSF and highlight characteristics of its ten-year predicted risk function for atherosclerotic cardiovascular disease events (ASCVD; stroke, coronary heart disease). We present visualizations comparing variable and partial effect estimation according to the ORSF, the conditional inference forest, and the Pooled Cohort Risk equations. The obliqueRSF R package, which provides functions to fit the ORSF and create variable and partial dependence plots, is available on the Comprehensive R Archive Network (CRAN).
A hidden Markov model approach to characterizing the photo-switching behavior of fluorophores. By projecteuclid.org. Published on: Wed, 16 Oct 2019 22:03 EDT. Authors: Lekha Patel, Nils Gustafsson, Yu Lin, Raimund Ober, Ricardo Henriques, Edward Cohen. Source: The Annals of Applied Statistics, Volume 13, Number 3, 1397–1429. Abstract: Fluorescing molecules (fluorophores) that stochastically switch between photon-emitting and dark states underpin some of the most celebrated advancements in super-resolution microscopy. While this stochastic behavior has been heavily exploited, full characterization of the underlying models can potentially drive forward further imaging methodologies. Under the assumption that fluorophores move between fluorescing and dark states as continuous-time Markov processes, the goal is to use a sequence of images to select a model and estimate the transition rates. We use a hidden Markov model to relate the observed discrete-time signal to the hidden continuous-time process. With imaging involving several repeat exposures of the fluorophore, we show the observed signal depends on both the current and past states of the hidden process, producing emission probabilities that depend on the transition rate parameters to be estimated. To tackle this unusual coupling of the transition and emission probabilities, we conceive transmission (transition-emission) matrices that capture all dependencies of the model. We provide a scheme of computing these matrices and adapt the forward-backward algorithm to compute a likelihood which is readily optimized to provide rate estimates. When confronted with several model proposals, combining this procedure with the Bayesian Information Criterion provides accurate model selection.
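A stripped-down stand-in for the likelihood computation described above: a two-state (dark/bright) continuous-time chain observed at discrete frames, with a scaled forward recursion and numerical maximization of the likelihood over the switching rates. It ignores the paper's transmission (transition-emission) matrices and repeated-exposure coupling, so treat it only as a sketch of the forward-algorithm idea; the detection probabilities and all names are illustrative assumptions.

```python
import numpy as np
from scipy.linalg import expm
from scipy.optimize import minimize

def forward_loglik(log_rates, obs, dt, p_detect=0.9, p_false=0.02):
    """Log-likelihood of a binary photon signal under a 2-state (dark/bright)
    continuous-time Markov chain observed every dt; each frame's emission
    depends only on the current state (a simplification of the paper)."""
    k_on, k_off = np.exp(log_rates)               # dark->bright, bright->dark
    Q = np.array([[-k_on, k_on], [k_off, -k_off]])
    P = expm(Q * dt)                              # frame-to-frame transition matrix
    emit = np.array([[1 - p_false, p_false],      # P(obs | dark)
                     [1 - p_detect, p_detect]])   # P(obs | bright)
    alpha = np.array([0.5, 0.5]) * emit[:, obs[0]]
    ll = np.log(alpha.sum()); alpha = alpha / alpha.sum()
    for o in obs[1:]:                             # scaled forward recursion
        alpha = (alpha @ P) * emit[:, o]
        s = alpha.sum(); ll += np.log(s); alpha /= s
    return ll

def estimate_rates(obs, dt):
    """Maximize the forward-algorithm likelihood over the two switching rates."""
    res = minimize(lambda lr: -forward_loglik(lr, obs, dt),
                   x0=np.log([1.0, 1.0]), method="Nelder-Mead")
    return np.exp(res.x)

if __name__ == "__main__":
    rng = np.random.default_rng(4)
    # simulate a discretized switching trace and a noisy binary readout
    true_kon, true_koff, dt, n = 0.5, 2.0, 0.1, 2000
    P = expm(np.array([[-true_kon, true_kon], [true_koff, -true_koff]]) * dt)
    states = [0]
    for _ in range(n - 1):
        states.append(rng.choice(2, p=P[states[-1]]))
    obs = np.array([rng.random() < (0.9 if s else 0.02) for s in states], dtype=int)
    print(estimate_rates(obs, dt))   # should be roughly (0.5, 2.0)
```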
Stratonovich type integration with respect to fractional Brownian motion with Hurst parameter less than $1/2$. By projecteuclid.org. Published on: Mon, 27 Apr 2020 04:02 EDT. Author: Jorge A. León. Source: Bernoulli, Volume 26, Number 3, 2436–2462. Abstract: Let $B^{H}$ be a fractional Brownian motion with Hurst parameter $H\in(0,1/2)$ and $p:\mathbb{R}\rightarrow\mathbb{R}$ a polynomial function. The main purpose of this paper is to introduce a Stratonovich type stochastic integral with respect to $B^{H}$, whose domain includes the process $p(B^{H})$. That is, an integral that allows us to integrate $p(B^{H})$ with respect to $B^{H}$, which does not happen with the symmetric integral given by Russo and Vallois (Probab. Theory Related Fields 97 (1993) 403–421) in general. Towards this end, we combine the approaches utilized by León and Nualart (Stochastic Process. Appl. 115 (2005) 481–492), and Russo and Vallois (Probab. Theory Related Fields 97 (1993) 403–421), whose aims are to extend the domain of the divergence operator for Gaussian processes and to define some stochastic integrals, respectively. Then, we study the relation between this Stratonovich integral and the extension of the divergence operator (see León and Nualart (Stochastic Process. Appl. 115 (2005) 481–492)), an Itô formula and the existence of a unique solution of some Stratonovich stochastic differential equations. These last results have been analyzed by Alòs, León and Nualart (Taiwanese J. Math. 5 (2001) 609–632), where the Hurst parameter $H$ belongs to the interval $(1/4,1/2)$.
A refined Cramér-type moderate deviation for sums of local statistics. By projecteuclid.org. Published on: Mon, 27 Apr 2020 04:02 EDT. Authors: Xiao Fang, Li Luo, Qi-Man Shao. Source: Bernoulli, Volume 26, Number 3, 2319–2352. Abstract: We prove a refined Cramér-type moderate deviation result by taking into account the skewness in normal approximation for sums of local statistics of independent random variables. We apply the main result to $k$-runs, U-statistics and subgraph counts in the Erdős–Rényi random graph. To prove our main result, we develop exponential concentration inequalities and higher-order tail probability expansions via Stein’s method.
First-order covariance inequalities via Stein’s method. By projecteuclid.org. Published on: Mon, 27 Apr 2020 04:02 EDT. Authors: Marie Ernst, Gesine Reinert, Yvik Swan. Source: Bernoulli, Volume 26, Number 3, 2051–2081. Abstract: We propose probabilistic representations for inverse Stein operators (i.e., solutions to Stein equations) under general conditions; in particular, we deduce new simple expressions for the Stein kernel. These representations allow us to deduce uniform and nonuniform Stein factors (i.e., bounds on solutions to Stein equations) and lead to new covariance identities expressing the covariance between arbitrary functionals of an arbitrary univariate target in terms of a weighted covariance of the derivatives of the functionals. Our weights are explicit, easily computable in most cases and expressed in terms of objects familiar within the context of Stein’s method. Applications of the Cauchy–Schwarz inequality to these weighted covariance identities lead to sharp upper and lower covariance bounds and, in particular, weighted Poincaré inequalities. Many examples are given and, in particular, classical variance bounds due to Klaassen, Brascamp and Lieb or Otto and Menz are corollaries. Connections with more recent literature are also detailed.
On sampling from a log-concave density using kinetic Langevin diffusions. By projecteuclid.org. Published on: Mon, 27 Apr 2020 04:02 EDT. Authors: Arnak S. Dalalyan, Lionel Riou-Durand. Source: Bernoulli, Volume 26, Number 3, 1956–1988. Abstract: Langevin diffusion processes and their discretizations are often used for sampling from a target density. The most convenient framework for assessing the quality of such a sampling scheme corresponds to smooth and strongly log-concave densities defined on $\mathbb{R}^{p}$. The present work focuses on this framework and studies the behavior of the Monte Carlo algorithm based on discretizations of the kinetic Langevin diffusion. We first prove the geometric mixing property of the kinetic Langevin diffusion with a mixing rate that is optimal in terms of its dependence on the condition number. We then use this result for obtaining improved guarantees of sampling using the kinetic Langevin Monte Carlo method, when the quality of sampling is measured by the Wasserstein distance. We also consider the situation where the Hessian of the log-density of the target distribution is Lipschitz-continuous. In this case, we introduce a new discretization of the kinetic Langevin diffusion and prove that this leads to a substantial improvement of the upper bound on the sampling error measured in Wasserstein distance.
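A minimal sketch of kinetic Langevin Monte Carlo for a strongly log-concave target, using a plain Euler-Maruyama discretization of the underdamped dynamics; the paper analyzes a particular discretization and its Wasserstein error, which this toy version does not reproduce. Step size, friction and the Gaussian test target are illustrative choices.

```python
import numpy as np

def kinetic_langevin_sampler(grad_f, x0, n_steps=20000, h=0.05, gamma=2.0, seed=0):
    """Euler-Maruyama discretization of the kinetic (underdamped) Langevin diffusion
        dX_t = V_t dt,   dV_t = -gamma V_t dt - grad f(X_t) dt + sqrt(2 gamma) dW_t,
    whose X-marginal targets the density proportional to exp(-f(x))."""
    rng = np.random.default_rng(seed)
    x = np.array(x0, dtype=float)
    v = np.zeros_like(x)
    samples = []
    for _ in range(n_steps):
        v = v - h * (gamma * v + grad_f(x)) + np.sqrt(2 * gamma * h) * rng.standard_normal(x.shape)
        x = x + h * v
        samples.append(x.copy())
    return np.array(samples)

if __name__ == "__main__":
    # strongly log-concave target: N(mu, Sigma), so f(x) = (x-mu)' Sigma^{-1} (x-mu) / 2
    mu = np.array([1.0, -2.0])
    Sigma_inv = np.linalg.inv(np.array([[2.0, 0.5], [0.5, 1.0]]))
    grad_f = lambda x: Sigma_inv @ (x - mu)
    chain = kinetic_langevin_sampler(grad_f, x0=np.zeros(2))
    print("sample mean:", chain[5000:].mean(axis=0))   # discard burn-in; should be near mu
```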
Optimal functional supervised classification with separation condition. By projecteuclid.org. Published on: Mon, 27 Apr 2020 04:02 EDT. Authors: Sébastien Gadat, Sébastien Gerchinovitz, Clément Marteau. Source: Bernoulli, Volume 26, Number 3, 1797–1831. Abstract: We consider the binary supervised classification problem with the Gaussian functional model introduced in (Math. Methods Statist. 22 (2013) 213–225). Taking advantage of the Gaussian structure, we design a natural plug-in classifier and derive a family of upper bounds on its worst-case excess risk over Sobolev spaces. These bounds are parametrized by a separation distance quantifying the difficulty of the problem, and are proved to be optimal (up to logarithmic factors) through matching minimax lower bounds. Using the recent works of (In Advances in Neural Information Processing Systems (2014) 3437–3445, Curran Associates) and (Ann. Statist. 44 (2016) 982–1009), we also derive a logarithmic lower bound showing that the popular $k$-nearest neighbors classifier is far from optimality in this specific functional setting.
Estimating the number of connected components in a graph via subgraph sampling. By projecteuclid.org. Published on: Mon, 27 Apr 2020 04:02 EDT. Authors: Jason M. Klusowski, Yihong Wu. Source: Bernoulli, Volume 26, Number 3, 1635–1664. Abstract: Learning properties of large graphs from samples has been an important problem in statistical network analysis since the early work of Goodman (Ann. Math. Stat. 20 (1949) 572–579) and Frank (Scand. J. Stat. 5 (1978) 177–188). We revisit a problem formulated by Frank (Scand. J. Stat. 5 (1978) 177–188) of estimating the number of connected components in a large graph based on the subgraph sampling model, in which we randomly sample a subset of the vertices and observe the induced subgraph. The key question is whether accurate estimation is achievable in the sublinear regime where only a vanishing fraction of the vertices are sampled. We show that it is impossible if the parent graph is allowed to contain high-degree vertices or long induced cycles. For the class of chordal graphs, where induced cycles of length four or above are forbidden, we characterize the optimal sample complexity within constant factors and construct linear-time estimators that provably achieve these bounds. This significantly expands the scope of previous results which have focused on unbiased estimators and special classes of graphs such as forests or cliques. Both the construction and the analysis of the proposed methodology rely on combinatorial properties of chordal graphs and identities of induced subgraph counts. They, in turn, also play a key role in proving minimax lower bounds based on construction of random instances of graphs with matching structures of small subgraphs.
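To make the subgraph sampling model concrete, the sketch below implements it for the forest special case mentioned in the abstract, where the classical unbiased estimator cc = n - |E| applies once the observed edge count is rescaled by the square of the vertex-sampling probability. The chordal-graph estimators of the paper are more involved and are not reproduced here; the forest generator and names are illustrative.

```python
import numpy as np

def make_random_forest(n_trees, rng):
    """Build a forest as (num_vertices, edge list); each tree is a random
    recursive tree of random size."""
    edges, n = [], 0
    for _ in range(n_trees):
        k = int(rng.integers(2, 20))
        for i in range(1, k):
            edges.append((n + i, n + int(rng.integers(i))))  # attach to an earlier node
        n += k
    return n, edges

def sample_induced_subgraph(n, edges, p, rng):
    """Subgraph sampling model: keep each vertex independently with prob p,
    observe the subgraph induced by the kept vertices."""
    kept = set(np.flatnonzero(rng.random(n) < p).tolist())
    observed_edges = [(u, v) for (u, v) in edges if u in kept and v in kept]
    return kept, observed_edges

def estimate_cc_forest(n_total, observed_edges, p):
    """Unbiased for forests: cc = n_total - |E|, and each parent edge survives
    vertex sampling with probability p^2."""
    return n_total - len(observed_edges) / p**2

if __name__ == "__main__":
    rng = np.random.default_rng(5)
    n, edges = make_random_forest(n_trees=200, rng=rng)
    true_cc, p = 200, 0.3                 # one component per tree
    estimates = []
    for _ in range(200):
        _, obs_edges = sample_induced_subgraph(n, edges, p, rng)
        estimates.append(estimate_cc_forest(n, obs_edges, p))
    print("true:", true_cc, "mean estimate:", np.mean(estimates))
```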
Stratonovich stochastic differential equation with irregular coefficients: Girsanov’s example revisited. By projecteuclid.org. Published on: Fri, 31 Jan 2020 04:06 EST. Authors: Ilya Pavlyukevich, Georgiy Shevchenko. Source: Bernoulli, Volume 26, Number 2, 1381–1409. Abstract: In this paper, we study the Stratonovich stochastic differential equation $\mathrm{d}X=|X|^{\alpha}\circ\mathrm{d}B$, $\alpha\in(-1,1)$, which has been introduced by Cherstvy et al. (New J. Phys. 15 (2013) 083039) in the context of analysis of anomalous diffusions in heterogeneous media. We determine its weak and strong solutions, which are homogeneous strong Markov processes spending zero time at $0$: for $\alpha\in(0,1)$, these solutions have the form \begin{equation*}X_{t}^{\theta}=((1-\alpha)B_{t}^{\theta})^{1/(1-\alpha)},\end{equation*} where $B^{\theta}$ is the $\theta$-skew Brownian motion driven by $B$ and starting at $\frac{1}{1-\alpha}(X_{0})^{1-\alpha}$, $\theta\in[-1,1]$, and $(x)^{\gamma}=|x|^{\gamma}\operatorname{sign}x$; for $\alpha\in(-1,0]$, only the case $\theta=0$ is possible. The central part of the paper consists in the proof of the existence of a quadratic covariation $[f(B^{\theta}),B]$ for a locally square integrable function $f$ and is based on the time-reversion technique for Markovian diffusions.
Robust regression via multivariate regression depth. By projecteuclid.org. Published on: Fri, 31 Jan 2020 04:06 EST. Author: Chao Gao. Source: Bernoulli, Volume 26, Number 2, 1139–1170. Abstract: This paper studies robust regression in the settings of Huber’s $\epsilon$-contamination models. We consider estimators that are maximizers of multivariate regression depth functions. These estimators are shown to achieve minimax rates in the settings of $\epsilon$-contamination models for various regression problems including nonparametric regression, sparse linear regression, reduced rank regression, etc. We also discuss a general notion of depth function for linear operators that has potential applications in robust functional linear regression.
Distances and large deviations in the spatial preferential attachment model. By projecteuclid.org. Published on: Fri, 31 Jan 2020 04:06 EST. Authors: Christian Hirsch, Christian Mönch. Source: Bernoulli, Volume 26, Number 2, 927–947. Abstract: This paper considers two asymptotic properties of a spatial preferential-attachment model introduced by E. Jacob and P. Mörters (In Algorithms and Models for the Web Graph (2013) 14–25, Springer). First, in a regime of strong linear reinforcement, we show that typical distances are at most of doubly-logarithmic order. Second, we derive a large deviation principle for the empirical neighbourhood structure and express the rate function as a solution to an entropy minimisation problem in the space of stationary marked point processes.
A Feynman–Kac result via Markov BSDEs with generalised drivers. By projecteuclid.org. Published on: Tue, 26 Nov 2019 04:00 EST. Authors: Elena Issoglio, Francesco Russo. Source: Bernoulli, Volume 26, Number 1, 728–766. Abstract: In this paper, we investigate BSDEs where the driver contains a distributional term (in the sense of generalised functions) and derive general Feynman–Kac formulae related to these BSDEs. We introduce an integral operator to give sense to the equation and then we show the existence of a strong solution employing results on a related PDE. Due to the irregularity of the driver, the $Y$-component of a couple $(Y,Z)$ solving the BSDE is not necessarily a semimartingale but a weak Dirichlet process.
Consistent semiparametric estimators for recurrent event times models with application to virtual age models. By projecteuclid.org. Published on: Tue, 26 Nov 2019 04:00 EST. Authors: Eric Beutner, Laurent Bordes, Laurent Doyen. Source: Bernoulli, Volume 26, Number 1, 557–586. Abstract: Virtual age models are very useful to analyse recurrent events. Among the strengths of these models is their ability to account for treatment (or intervention) effects after an event occurrence. Despite their flexibility for modeling recurrent events, the number of applications is limited. This seems to be a result of the fact that in the semiparametric setting all the existing results assume the virtual age function that describes the treatment (or intervention) effects to be known. This shortcoming can be overcome by considering semiparametric virtual age models with parametrically specified virtual age functions. Yet, fitting such a model is a difficult task. Indeed, it has recently been shown that for these models the standard profile likelihood method fails to lead to consistent estimators. Here we show that consistent estimators can be constructed by smoothing the profile log-likelihood function appropriately. We show that our general result can be applied to most of the relevant virtual age models of the literature. Our approach shows that empirical process techniques may be a worthwhile alternative to martingale methods for studying asymptotic properties of these inference methods. A simulation study is provided to illustrate our consistency results together with an application to real data.
Operator-scaling Gaussian random fields via aggregation. By projecteuclid.org. Published on: Tue, 26 Nov 2019 04:00 EST. Authors: Yi Shen, Yizao Wang. Source: Bernoulli, Volume 26, Number 1, 500–530. Abstract: We propose an aggregated random-field model, and investigate the scaling limits of the aggregated partial-sum random fields. In this model, each copy in the aggregation is a $\pm1$-valued random field built from two correlated one-dimensional random walks, the law of each determined by a random persistence parameter. A flexible joint distribution of the two parameters is introduced, and given the parameters the two correlated random walks are conditionally independent. For the aggregated random field, when the persistence parameters are independent, the scaling limit is a fractional Brownian sheet. When the persistence parameters are tail-dependent, characterized in the framework of multivariate regular variation, the scaling limit is more delicate, and in particular depends on the growth rates of the underlying rectangular region along two directions: at different rates different operator-scaling Gaussian random fields appear as the region area tends to infinity. In particular, at the so-called critical speed, a large family of Gaussian random fields with long-range dependence arise in the limit. We also identify four different regimes at non-critical speed where fractional Brownian sheets arise in the limit.
English given names : popularity, spelling variants, diminutives and abbreviations / by Carol Baxter. By www.catalog.slsa.sa.gov.au. Subject: Names, Personal -- England.
Welsh given names : popularity, spelling variants, diminutives and abbreviations / by Carol Baxter. By www.catalog.slsa.sa.gov.au. Subject: Names, Personal -- Welsh.
Scottish given names : popularity, spelling variants, diminutives and abbreviations / by Carol Baxter. By www.catalog.slsa.sa.gov.au. Subject: Names, Personal -- Scottish.